Tag: artificial intelligence

  • Krutrim Becomes India’s 1st AI Unicorn; Check All About Ola CEO Bhavish Aggarwal’s Venture

    New Delhi: On Friday, the Indian AI company Krutrim achieved something big! It became the fastest unicorn in the country and also the first AI unicorn. What does that mean? Well, it closed its first round of funding where investors, including Matrix Partners India, put in $50 million. This investment valued Krutrim at a whopping $1 billion.

    In simpler terms, a lot of people believe in Krutrim’s potential and have given it a lot of money to help it grow. Let’s have a look at the details of the company.

    The Birth Of Krutrim Si Designs

    Bhavish Aggarwal and co-founder Tenneti joined hands to establish Krutrim Si Designs, a tech venture under the umbrella of ANI Technologies Limited, the parent company of Ola Cabs and Ola Electric.

    What Does The Word ‘Krutrim’ Mean?

    Named after the Sanskrit word for ‘artificial,’ Krutrim is not just a buzzword; it’s a large language model (LLM) that has undergone training on a staggering 2 trillion ‘tokens’ — these are like building blocks of language used in everyday conversations.

    Models Of Krutrim

    Krutrim debuts with a base model, set to hit the market next month. But hold onto your seats, because the advanced Krutrim Pro is scheduled for release early next year, promising cutting-edge capabilities for problem-solving and task execution.

    Language Diversity

    Krutrim comprehends 20 Indian languages and can generate content in 10 of them, including Hindi, Kannada, Marathi, and Telugu. The team proudly asserts that Krutrim outshines even GPT-4 in supporting Indic languages.

    How Does Krutrim Work?

    Krutrim employs a custom tokenizer to interpret various languages and scripts, making it a versatile linguistic wizard. In head-to-head comparisons with other open-source LLMs trained with similar data volumes, Krutrim emerges victorious across a range of industry-standard benchmarks.
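
    For readers curious what a ‘custom tokenizer’ involves, here is a minimal sketch of training a shared multilingual subword tokenizer with the open-source SentencePiece library. The corpus file, vocabulary size and other settings are illustrative assumptions, not details of Krutrim’s actual tokenizer.

    ```python
    # Minimal sketch, assuming the open-source SentencePiece library and a
    # plain-text corpus; file names and settings are illustrative, not Krutrim's.
    import sentencepiece as spm

    # Train one shared vocabulary over mixed-script text (Devanagari, Kannada,
    # Telugu, Latin, ...). character_coverage close to 1.0 keeps rare Indic
    # characters from being dropped from the vocabulary.
    spm.SentencePieceTrainer.train(
        input="multilingual_corpus.txt",   # hypothetical corpus file
        model_prefix="indic_tokenizer",
        vocab_size=64000,
        character_coverage=0.9995,
        model_type="unigram",
    )

    # Load the trained model and split text into the subword 'tokens'
    # that an LLM is actually trained on.
    sp = spm.SentencePieceProcessor(model_file="indic_tokenizer.model")
    print(sp.encode("नमस्ते, how are you?", out_type=str))
    ```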

  • CES 2024: List Of AI-Powered Gadgets That Will Blow Your Mind!

    New Delhi: Stealing the limelight at the Consumer Electronics Show (CES) 2024, artificial intelligence emerges as the star of the show. Cutting-edge advancements and innovations in AI take center stage, captivating attendees with a glimpse into the future of technology. CES 2024 proves to be a testament to the pervasive influence of AI on consumer electronics.

    Let’s dive into the list of top AI-powered products at CES 2024:

    Minitailz Smart Dog Collar

    Invoxia has unveiled a new smart collar called Invoxia Minitailz that is suitable for both cats and dogs. The French company has claimed that Minitailz can measure walks, runs, and even daily zoomies for your pets. It will also help in monitoring the pet’s activity, including heart rate and behavior. Notably, the Minitailz Smart Dog Collar earned the CES Best Innovation award in the AI category.

    Bmind Smart Mirror

    The world’s first AI-powered smart mirror is designed to improve your mental wellness by identifying your mood and managing stress with AI. Notably, this AI-powered product represents a new standard for self-care in the digital age.

    Motion Pillow

    The AI-powered smart pillow is designed to curb snoring and help you sleep better at night. It tracks sleep data, including snoring time, airbag operation time, sleep score and sleep time, and can even record your snoring to play back later. Everything is operated via a companion app (available for iOS and Android). Notably, a variety of colored pillowcases are available to match different bedroom decors.

    Samsung’s Ballie AI robot

    Ballie has been revamped with new advanced features to help users intelligently navigate their lives. This robot can act like an AI pet in your house, including walking around and coming to you when called. This AI-powered robot is capable of keeping an eye on pets while you are out. It also uses artificial intelligence to learn your habits and offer more personalization.

    Nobi Smart Lamp

    These intelligent lamps are poised to transform the way we monitor and provide care for seniors, offering impressive fall detection and prevention capabilities. The AI-powered Nobi Smart Lamp’s algorithms analyze movement patterns, swiftly recognizing any indicators of a fall incident.
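
    Nobi has not published how its algorithm works, but the general idea of analysing movement patterns for fall indicators can be illustrated with a simple, purely hypothetical heuristic: flag a sudden drop in a tracked person’s estimated height followed by a prolonged period close to the floor. The sketch below is an assumption for illustration only, not the product’s actual method.

    ```python
    # Purely illustrative fall-detection heuristic on a per-frame height trace
    # (e.g. estimated head height of a tracked person). Not Nobi's algorithm.
    import numpy as np

    def detect_fall(heights, fps=10, drop_m=0.8, drop_window_s=1.0, still_s=5.0):
        """Return True if heights show a rapid drop followed by a long low period."""
        heights = np.asarray(heights, dtype=float)
        drop_frames = int(drop_window_s * fps)
        still_frames = int(still_s * fps)
        for i in range(len(heights) - drop_frames - still_frames):
            before = heights[i]
            after = heights[i + drop_frames]
            if before - after >= drop_m:  # sudden height drop within ~1 second
                low = heights[i + drop_frames : i + drop_frames + still_frames]
                if np.all(low < before - 0.75 * drop_m):  # stays near the floor
                    return True
        return False

    # Synthetic trace: standing (~1.5 m), sudden fall, then staying down.
    trace = [1.5] * 30 + list(np.linspace(1.5, 0.3, 10)) + [0.3] * 60
    print(detect_fall(trace))  # True
    ```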

  • ‘AI Girlfriends’ Flood GPT Store Shortly After Launch, OpenAI Rules Breached

    New Delhi: OpenAI’s recently launched GPT store is encountering difficulties with moderation just a few days after its debut. The platform provides personalized editions of ChatGPT, but certain users are developing bots that violate OpenAI’s guidelines.

    These bots, with names such as “Your AI companion, Tsu,” enable users to personalize their virtual romantic companions, violating OpenAI’s restriction on bots explicitly created for nurturing romantic relationships.

    The company is actively working to address this problem. OpenAI revised its policies when the store was introduced on January 10, 2024. However, the violation of policy on the second day highlights the challenges associated with moderation.

    The growing demand for relationship bots adds a layer of complexity to the situation. As reported, seven of the 30 most downloaded AI chatbots in the United States last year were virtual friends or partners. This trend is linked to the prevailing loneliness epidemic.

    OpenAI states that it assesses GPT models using automated systems, human review and user reports, applying warnings or sales bans to those considered harmful. However, the continued presence of girlfriend bots in the store raises doubts about the effectiveness of this assertion.

    The difficulty in moderation reflects the common challenges experienced by AI developers. OpenAI has faced issues in implementing safety measures for previous models such as GPT-3. With the GPT store available to a wide user audience, the potential for insufficient moderation is a significant concern.

    Other technology companies are also moving swiftly to handle problems with their AI systems, understanding the significance of quick action amid growing competition. Yet the initial breaches highlight the significant moderation challenges expected in the future.

    Even within the specific environment of a specialized GPT store, managing narrowly focused bots seems to be a complicated task. As AI progresses, ensuring its safety is set to become more complex.

  • Annamalai hails PM Modi for using AI tools for speeches: ‘Game changer in Indian politics’

    Chennai: Tamil Nadu BJP President Annamalai hailed Prime Minister Narendra Modi on December 18 for using AI translation tools for speeches, saying this would be a game-changer in Indian politics. He said, “We all know Prime Minister is a person who is very close to technology. He has experimented artificial intelligence for a real-time translation of his voice in Hindi to Tamil. I am sure that this is going to be a game changer in the years to come. I am sure this will be a game changing moment in Indian politics, especially in southern parts of India when Prime Minister’s voice can be felt by any common person real-time.”

  • Artificial Intelligence poses danger for Hollywood stunt workers

    By AFP

    LOS ANGELES: Hollywood’s striking actors fear that artificial intelligence is coming for their jobs — but for many stunt performers, that dystopian danger is already a reality.

    From “Game of Thrones” to the latest Marvel superhero movies, cost-slashing studios have long used computer-generated background figures to reduce the number of actors needed for battle scenes.

    Now, the rise of AI means cheaper and more powerful techniques are being explored to create highly elaborate action sequences such as car chases and shootouts — without those pesky (and expensive) humans.

    “The technology is exponentially getting faster and better,” said Freddy Bouciegues, stunt coordinator for movies like “Free Guy” and “Terminator: Dark Fate.”

    “It’s really a scary time right now.”

    Studios are already requiring stunt and background performers to take part in high-tech 3D “body scans” on set, often without explaining how or when the images will be used.

    Advancements in AI mean these likenesses could be used to create detailed, eerily realistic “digital replicas,” which can perform any action or speak any dialogue its creators wish.

    Bouciegues fears producers could use these virtual avatars to replace “nondescript” stunt performers — such as those playing pedestrians leaping out of the way of a car chase.

    “There could be a world where they said, ‘No, we don’t want to bring these 10 guys in… we’ll just add them in later via effects and AI. Now those guys are out of the job.’”

    But according to director Neill Blomkamp, whose new film “Gran Turismo” hits theatres on August 25, even that scenario only scratches the surface.

    The role AI will soon play in generating images from scratch is “hard to compute,” he told AFP.

    “Gran Turismo” primarily uses stunt performers driving real cars on actual racetracks, with some computer-generated effects added on top for one particularly complex and dangerous scene.

    But Blomkamp predicts that, in as soon as six or 12 months, AI will reach a point where it can generate photo-realistic footage like high-speed crashes based on a director’s instructions alone.

    At that point, “you take all of your CG (computer graphics) and VFX (visual effects) computers and throw them out the window, and you get rid of stunts, and you get rid of cameras, and you don’t go to the racetrack,” he told AFP.

    “It’s that different.”

    The lack of guarantees over the future use of AI is one of the major factors at stake in the ongoing strike by the Screen Actors Guild (SAG-AFTRA) and Hollywood’s writers, who have been on the picket lines for 100 days.

    SAG-AFTRA last month warned that studios intend to create realistic digital replicas of performers, to use “for the rest of eternity, in any project they want,” all for the payment of one day’s work.

    The studios dispute this and say they have offered rules including informed consent and compensation.

    But as well as the potential implications for thousands of lost jobs, Bouciegues warns that no matter how good the technology has become, “the audience can still tell” when the wool is being pulled over their eyes by computer-generated VFX.

    Even if AI can perfectly replicate a battle, explosion or crash, it cannot supplant the human element that is vital to any successful action film, he said, pointing to Tom Cruise’s recent “Top Gun” and “Mission Impossible” sequels.

    “He uses real stunt people, and he does real stunts, and you can see it on the screen. For me, I feel like it subconsciously affects the viewer,” said Bouciegues.

    Current AI technology still gives “slightly unpredictable results,” agreed Blomkamp, who began his career in VFX, and directed Oscar-nominated “District 9.”

    “But it’s coming… It’s going to fundamentally change society, let alone Hollywood. The world is going to be different.”

    For stunt workers like Bouciegues, the best outcome now is to blend the use of human performers with VFX and AI to pull off sequences that would be too dangerous with old-fashioned techniques alone.

    “I don’t think this job will ever just cease to be,” said Bouciegues, of stunt work. “It just definitely is going to get smaller and more precise.”

    But even that is a sobering reality for stunt performers who are currently standing on picket lines outside Hollywood studios.

    “Every stunt guy is the alpha male type, and everybody wants to say, ‘Oh, we’re good,’” said Bouciegues.

    “But I personally have spoken to a lot of people that are freaked out and nervous.”

  • Developing AI technology more dangerous than nuclear weapons: Christopher Nolan

    By IANS

    LOS ANGELES: Christopher Nolan, much like other big Hollywood figures such as James Cameron, Simon Pegg and Tom Cruise, has spoken about the increasing use of artificial intelligence both in movies and in real life, and has warned at length about its dangers.

    With his massive biopic ‘Oppenheimer’, which deals with the creation of nuclear weapons, currently ruling cinemas, Nolan has said that AI is even more dangerous than nukes.

    As reported by Aceshowbiz, while speaking to ‘The Guardian’, the ‘Interstellar’ director said: “To look at the international control of nuclear weapons and feel that the same principles could be applied to something that doesn’t require massive industrial processes – it’s a bit tricky.”

    He added: “International surveillance of nuclear weapons is possible because nuclear weapons are very difficult to build. Oppenheimer spent $2 billion and used thousands of people across America to build those first bombs. It’s reassuringly difficult to make nuclear weapons and so it’s relatively easy to spot when a country is doing that. I don’t believe any of that applies to AI.”

    Nolan went on to say that the increasingly intimate relationship between AI and weaponry underlines the need for corporate accountability and greater scrutiny.

    He further went on to say that the very thought of people producing or using such technology without truly understanding its implications is, “absolutely terrifying … because as AI systems go into the defense infrastructure, ultimately they’ll be in charge of nuclear weapons.”

    At a special screening of ‘Oppenheimer’ back on July 20, the director spoke to a group of scientists working in the field of AI, who said they too have questioned their work many times.

    Many of these scientists and researchers have described the developments in their own field as their personal ‘Oppenheimer’ moment as they ponder the possible outcomes of such advances in AI technology.

    ‘The Dark Knight’ director also said that while the need for global accountability in AI control is becoming ever more important as weapons technology and systems of control such as surveillance advance, “the United Nations has become a very diminished force” in controlling it.

    The director added that he hoped audiences watching ‘Oppenheimer’ would better understand the prospects of controlling weapon systems and artificial intelligence.

  • Deploying robots as sentries, deciphering Mandarin into English: Artificial Intelligence to strengthen Indian defence forces

    Express News Service

    NEW DELHI: India’s focus on the new-age disruptive technology of Artificial Intelligence (AI) means the forces will soon have not just robots on sentry duty, but also robots marking and warning soldiers about mines in mine-laden fields.

    Even those soldiers operating along the Northern Border will have an AI device to translate Mandarin into English for them.

    These three, along with 72 more such devices and products, make up the 75 newly developed Artificial Intelligence (AI) products/technologies that Defence Minister Rajnath Singh launched during the first-ever ‘AI in Defence’ (AIDef) symposium and exhibition, organised by the Ministry of Defence in New Delhi on Monday.

    These included AI Platform Automation; Autonomous/Unmanned/Robotics systems; BlockChain-based Automation; Command, Control, Communication, Computer & Intelligence, Surveillance & Reconnaissance; Cyber Security; Human Behavioural Analysis; Intelligent Monitoring Systems; Lethal Autonomous Weapon Systems; Logistics and Supply Chain Management, Operational Data Analytics; Manufacturing and Maintenance; Simulators/Test Equipment and speech/voice analysis using Natural Language Processing.

    A startup, CogKnit, run by Anuroop Iyengar, is developing the voice recognition and Mandarin translation device.

    “The device is offline and can recognize voices at a distance of 5ft. It will be helpful during the Border personnel Meetings and also in times of any standoffs for better communication.”

    Work is on to bring the weight of the device down from 600 gm to 200 gm and to increase its effective range to 15 ft or more.
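
    CogKnit’s actual stack is not public, but an offline Mandarin-to-English pipeline of this kind can be sketched with openly available models by chaining local speech recognition to local machine translation. The model choices and file name below are illustrative assumptions, not the device’s real components.

    ```python
    # Illustrative offline speech-translation sketch: Whisper for Mandarin
    # speech recognition, a MarianMT model for Chinese-to-English translation.
    # Both run locally once the model weights have been downloaded.
    import whisper
    from transformers import pipeline

    asr = whisper.load_model("small")
    mt = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-en")

    def translate_clip(audio_path: str) -> str:
        """Transcribe Mandarin speech from an audio file and return English text."""
        mandarin_text = asr.transcribe(audio_path, language="zh")["text"]
        return mt(mandarin_text)[0]["translation_text"]

    # Hypothetical usage with a clip recorded by the device's microphone.
    print(translate_clip("border_meeting_clip.wav"))
    ```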

    Coming to the autonomous robot that will function as a sentry, Major Paras Kanwar said it can be put on a metal rail and, with its AI application, will be able to recognize a person from afar. “The device will challenge a person as it is enabled to differentiate between a friend and a foe.” It will keep storing the data for future use and can be operated from a distance of more than five kilometers with wireless signals.

    Major Kanwar is also part of an AI-based offensive weapon project in which a device will locate, detect and fire on an enemy. These products have been promoted by the Army Design Bureau and will soon go into mass production.

    Three AI products developed by the Defence PSUs with dual-use applications and good market potential were screened during the event: AI-enabled Voice Transcription/Analysis software developed by Bharat Electronics Limited; a Driver Fatigue Monitoring System developed by Bharat Earth Movers Limited; and AI-enabled evaluation of welding defects in X-rays of non-destructive testing, developed by Garden Reach Shipbuilders & Engineers. These products are expected to open up new business avenues for the Defence PSUs.

    Speaking at the event, Rajnath Singh pointed out that AI has made inroads into almost every sector, including defence, health and medicine, agriculture, trade and commerce, and transport. He called upon all the defence stakeholders to bring together human consciousness and the ability of AI to bring about a radical change in the sector.

    “When there has been full human participation in wars, new autonomous weapons/systems have been developed with the help of AI applications. They can destroy enemy establishments without human control. AI-enabled military devices are capable of handling large amounts of data efficiently. It is also proving to be very helpful in training the soldiers. In the coming times, Augmented and Virtual Reality technologies will also be used effectively,” he said.

  • Government to launch AI-driven portal for smooth disbursal of pension

    By Express News Service

    NEW DELHI: In order to seamlessly process, track and disburse pensions, the Department of Pension and Pensioners’ Welfare will soon launch an Artificial Intelligence (AI)-enabled common single pension portal for the benefit of pensioners and elderly citizens.

    Named Bhavishya, the AI-supported portal will send automatic alerts to pensioners and superannuated elderly citizens, Union Minister of State at the PMO Jitendra Singh said here today. The portal will not only enable constant contact with pensioners and their associations across the country but will also regularly receive their inputs, suggestions and grievances for prompt response.

    While Bhavishya has ensured end-to-end digitalisation of pension processing and payment, officials have been advised to conduct pre-retirement workshops at regular intervals to counsel retiring employees and learn from their experiences.

    The Bhavishya platform has been mandatory for all central government departments since January 2017 and is now implemented in the main secretariats of 97 ministries/departments, with 815 attached offices and 7,852 drawing and disbursing officers (DDOs) on board. Till date, more than 162,000 cases have been processed and pension payment orders (PPOs) issued, including 96,000 e-PPOs.

  • Centre issues draft norms to mobilise non-personal citizen data available with government

    By PTI

    NEW DELHI: The Ministry of Electronics and IT (MeitY) has issued a draft National Data Governance Framework to mobilise non-personal data of citizens for use by both public and private entities to improve services.

    The draft policy proposes the launch of a non-personal data-based India datasets program and addresses the methods and rules to ensure that non-personal and anonymized data from both government and private entities are safely accessible by the research and innovation ecosystem.

    Minister of State for Electronics and IT Rajeev Chandrasekhar said the National Data Governance Framework is of interest for artificial intelligence (AI) startups, AI research entities and government departments.

    “Its imp piece of policy framework thats being devlopd to catalyze #India’s $1 Trillion #DigitalEconomy,” he tweeted.

    Asking stakeholders to comment on the draft framework, the minister said the National Data Governance Framework will also accelerate digital government and digitisation of government with common standards, rules and guidelines for data storage and management across all departments.

    The draft said that during the COVID-19 pandemic, digital governance played a big part in India’s resilient response to the pandemic and its impact on lives, livelihoods and the economy.

    In the post-COVID era, digitisation of government is accelerating and data generation is increasing exponentially; this data can in turn be used to improve citizens’ experience and engagement with the government and governance as a ‘Digital Nagrik’.

    The digital government data is currently managed, stored and accessed in differing and inconsistent ways across different government entities, thus attenuating the efficacy of data-driven governance, and preventing an innovative ecosystem of data science, analytics and AI from emerging to its full potential, the draft said.

    “The power of this data must be harnessed for more effective Digital Government, public good and innovation, thus requiring a National Data Governance Framework Policy (NDGFP),” the draft said.

    The proposed policy will be applicable to all government departments and entities, and the rules and standards prescribed will cover all data collected and managed by any government entity.

    It proposes to cover all non-personal datasets and data, as well as the platforms, rules and standards governing their access and use by researchers and startups.

    “State Governments shall be encouraged to adopt the provisions of the Policy and rules, standards, and protocols as applicable,” the draft said.

    The draft also proposes setting up of an ‘India Data Management Office (IDMO)’, under the Digital India Corporation, which shall be responsible for framing, managing and periodically reviewing and revising the policy.

    “The IDMO shall be responsible for developing rules, standards, and guidelines under this policy that shall be published periodically,” the draft said.

    MeitY has fixed June 11 as the deadline for submission of comments by stakeholders on the draft available on its website.

  • Indian Navy ropes in new-age tech with 30 Artificial Intelligence projects in the works

    Express News Service

    NEW DELHI: The Indian Navy has launched major projects and initiatives to incorporate new-age advanced technology into the service at the systems and processes levels. Along with the centres of excellence, the Navy has begun exposing its personnel to academics and experts from outside, keeping the future in mind.

    Commander Vivek Madhwal, spokesperson of the Indian Navy, said on Thursday, “Navy is progressing around 30 AI projects and initiatives encompassing Autonomous Systems, Language Translation, Predictive Maintenance, Inventory Management, Text Mining, Perimeter Security, Maritime Domain Awareness and Decision Making.”

    Indian Navy is focused on the incorporation of Artificial Intelligence (AI) and Machine Learning (ML) in critical mission areas. “AI initiatives being steered by the Navy are envisaged to have both tactical and strategic level impact,” added Madhwal.

    The Indian Navy is organising seminars and workshops with capacity building in mind. The Navy’s premier technical training institute, INS Valsura, organised a workshop on the contemporary topic ‘Leveraging Artificial Intelligence (AI) for Indian Navy’ from 19 to 21 January 2022. Conducted under the aegis of the Southern Naval Command, the three-day event saw prominent speakers from renowned IT companies such as Google, IBM, Infosys and TCS share the industry perspective.

    Distinguished academicians from IIT Delhi, New York University and Indian private universities also spoke about the latest trends and applications of AI. The keynote address was delivered by Vice Admiral MA Hampiholi, Flag Officer Commanding-in-Chief, Southern Naval Command, who stressed the strategic importance of this niche technology and its application in the Indian Navy. The webinar saw online participation by over 500 participants from across the country.

    Located at Jamnagar, INS Valsura has already been designated as the Center of Excellence (CoE) in the field of Big Data, and a state-of-the-art lab on AI and Big Data Analysis (BDA) was set up in January 2020.

    Regarding its future endeavour, the Indian Navy in a statement said, “In addition, the Navy is currently in the process of creating a Center of Excellence (CoE) in the field of AI at INS Valsura, which has been instrumental in the progress of pilot projects pertaining to the adoption of AI and BDA in the domain of maintenance, HR and perception assessment, in collaboration with academia and industry.”

    Additionally, the Navy is deeply involved in unifying and reorganising its enterprise data, as data is the fuel for all AI engines, it said.

    At the organisational level, the Navy has formed an AI core group that meets twice a year to assess all AI/ML initiatives and keep a tab on timelines. “Periodic reviews of AI projects are being held so as to ensure adherence to the promulgated timelines. The Navy also conducts training in AI/ML across all levels of speciality for its officers and sailors,” the Navy said.

    This training is held both within the Navy’s own training schools and at renowned IITs. Several personnel have undergone AI-linked courses, big and small, over the last three years. These initiatives of the Indian Navy are in sync with the country’s vision of making “India the global leader in AI, ensuring responsible and transformational AI for All”.