The first case involves an AI-generated image of a nude woman posted on Instagram, resembling a public figure from India.
-
Higgsfield AI Unveils Image To Video Generator App: Check How It Works
New Delhi: Higgsfield AI, a video AI company, has recently launched its first artificial intelligence (AI)-powered app for smartphones, named Diffuse. The mobile application is an image-to-video generator: according to the company, it can transform a selfie into a lifelike character within a video.
What Is Diffuse?
Diffuse, the brainchild of Higgsfield AI, was launched to transform video content creation by letting users seamlessly insert themselves into videos.
How Does Diffuse Work?
The app utilizes advanced AI algorithms to generate personalized characters with lifelike motion, all from a single selfie.
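Higgsfield has not published Diffuse’s internals, so the following is only a rough sketch of the general image-to-video idea, using the open-source Stable Video Diffusion pipeline from the Hugging Face diffusers library as an assumed stand-in for whatever model Diffuse actually runs:

```python
# Illustrative sketch only: Higgsfield has not published Diffuse's internals.
# This uses the open-source Stable Video Diffusion pipeline (Hugging Face
# `diffusers`) to show the general image-to-video idea from a single photo.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load a publicly available image-to-video model (a stand-in, not Diffuse).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = pipe.to("cuda")

# A single selfie is the only conditioning input, as the article describes.
selfie = load_image("selfie.jpg").resize((1024, 576))

# Generate a short clip (a couple of seconds at ~7 fps) from the still image.
frames = pipe(selfie, decode_chunk_size=8, num_frames=14).frames[0]
export_to_video(frames, "selfie_clip.mp4", fps=7)
```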
Diffuse: Availability
Diffuse is initially being introduced to select markets on both Android and iOS. Users in regions such as India, South Africa, the Philippines, Canada, and countries in Central Asia can access the app as it is gradually rolled out.
Diffuse: Features
With Diffuse, users have the flexibility to choose from a library of video content or create personalized videos from scratch using text, images, or existing video clips.
AI Technology Behind Diffuse
Higgsfield AI is committed to developing cutting-edge AI technology to power Diffuse and future endeavors. The company’s foundational model, built entirely from scratch, employs transformer architectures similar to those used by OpenAI’s ChatGPT. Additionally, Higgsfield AI has leveraged proprietary frameworks developed in-house to efficiently train its AI model on limited GPU resources.
Future Plans
While Diffuse is currently available in preview mode, offering 2-second video generation, Higgsfield AI aims to enhance its capabilities further. The company’s ultimate goal is to achieve realistic, detailed, and fluid video generation directly on mobile devices. Although the full version release date remains undisclosed, Higgsfield AI continues to work towards refining its AI technology for public release.
-
Misinformation Spread Via Deepfakes Biggest Threat To Upcoming Polls In India: Tenable
New Delhi: Misinformation and disinformation spread through artificial intelligence (AI)-generated deepfakes and fake content are the biggest threats to the upcoming elections in India, exposure management company Tenable said on Sunday.
According to the company, these threats will be shared across social media and messaging platforms like WhatsApp, X (formerly Twitter), Instagram, and others.
“The biggest threats to the 2024 Lok Sabha elections are misinformation and disinformation as part of influence operations conducted by malicious actors against the electorate,” Satnam Narang, Senior Staff Research Engineer at Tenable, told IANS.
A recent report by Tidal Cyber highlighted that this year, 10 countries will face the highest levels of election cyber interference threats, including India.
Recently, deepfake videos of former US President Bill Clinton and current President Joe Biden were fabricated and circulated to confuse citizens ahead of the upcoming US presidential election.
Experts note that the proliferation of deepfake content surged in late 2017, with over 7,900 videos online. By early 2019, this number nearly doubled to 14,678, and the trend continues to escalate.
“With the increase in generative AI tools and their use growing worldwide, we may see deepfakes, be it in images or video content, impersonating notable candidates seeking to retain their seats or those hoping to unseat incumbents in parliament,” Narang added.
The Indian government has recently issued directives to social media platforms such as X and Meta (formerly Facebook), urging them to regulate the proliferation of AI-generated deepfake content.
Additionally, ahead of the Lok Sabha elections, the Ministry of Electronics & IT (MeitY) has issued an advisory directing these platforms to remove AI-generated deepfakes.
Tenable suggests that the easiest way to identify a deepfake image is to look for nonsensical text or language that looks almost alien-like.
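Tenable’s tip can be partly automated. The sketch below is a toy heuristic, not a reliable detector: it assumes the pytesseract library and the Tesseract OCR binary are installed, extracts any visible text from a suspect image, and flags tokens that look “alien-like” (no vowels, long consonant runs):

```python
# Rough sketch of Tenable's tip: OCR the text in a suspect image and flag
# tokens that look "alien-like". Assumes pytesseract and the Tesseract binary
# are installed. This is a toy heuristic, not a reliable deepfake detector.
import re
from PIL import Image
import pytesseract

def looks_gibberish(word: str) -> bool:
    # Alien-like tokens often lack vowels or contain long consonant runs.
    no_vowel = re.search(r"[aeiou]", word) is None
    consonant_run = re.search(r"[bcdfghjklmnpqrstvwxyz]{4,}", word) is not None
    return no_vowel or consonant_run

def gibberish_ratio(image_path: str) -> float:
    """Fraction of OCR'd words (length > 2) that look like nonsense."""
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    words = [w for w in re.findall(r"[a-z]+", text) if len(w) > 2]
    if not words:
        return 0.0  # no readable text found; the heuristic says nothing
    return sum(looks_gibberish(w) for w in words) / len(words)

ratio = gibberish_ratio("suspect_image.jpg")  # hypothetical input file
print(f"Gibberish ratio: {ratio:.2f} (higher values warrant a closer look)")
```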
-
Misinformation Combat Alliance To Launch WhatsApp Tipline On Mar 25 To Combat Deepfakes
New Delhi: Ahead of general elections, the Misinformation Combat Alliance (MCA) is launching a Deepfakes Analysis Unit (DAU) tipline on March 25 to help detect and respond to AI-generated synthetic media, a release said on Thursday.
The move assumes significance as the countdown to elections in India has seen the industry scale up election integrity efforts, particularly to crack down on deepfakes and AI (Artificial Intelligence) deceptions. The general elections are scheduled to be held in seven phases starting from April 19, and results will be declared on June 4.
“MCA’s Deepfakes Analysis Unit launches WhatsApp tipline in collaboration with Meta to detect deepfakes,” the release said. With the latest initiative, any member of the public can forward audio notes and videos to +91 9999025044 on WhatsApp to get assessments on whether a piece of media is AI-generated or contains elements of it, the release said.
The tipline will offer support in English, Hindi, Tamil and Telugu, the release said. The launch of the WhatsApp tipline is in line with Meta’s efforts to collaborate with industry stakeholders in the fact-checking ecosystem to build instruments that help curb the spread of AI-generated misinformation.
The MCA is a cross-industry alliance that brings together companies, organisations, institutions, industry associations and entities to collectively fight misinformation and its impact. Currently, MCA has 16 members, including fact-checking organisations, media outlets, and civic tech organisations, and is inviting strategic partners to collaborate on this industry-wide initiative to combat misinformation.
The DAU has been set up with the aim of providing the public with a trusted resource that will help them differentiate between real and synthetic media. The initiative will tap into expertise from a network of partners – academicians, researchers, startups, tech platforms, and fact-checkers – to verify and assess the media content submitted to the tipline.
The assessment reports will be sent back to users in response to their messages. The reports will also be available on the DAU website and DAU’s recently launched WhatsApp Channel which will serve as an authoritative source for people to receive verified and accurate information, according to the release. DAU’s partners include member fact-checking organisations as well as industry partners and digital labs that will help assess and verify the content.
-
EU AI Act: Regulations Impacting ChatGPT, Google Gemini, And Deepfakes - All You Need To Know
New Delhi: The European Parliament has just given its green light to the world’s first comprehensive AI law, the EU AI Act, which will govern how artificial intelligence (AI) is used across the continent. These rules are designed to ensure that humans remain in control of this powerful technology and that it serves the best interests of humanity.
Interestingly, it took a whopping five years for these rules to pass through the EU Parliament, indicating the thoroughness and significance of the regulations.
Scope of the Regulations: Which AI Systems Are Covered?
The EU AI Act will have a broad impact, affecting AI systems such as OpenAI’s ChatGPT and Google’s Gemini, among others. Essentially, any machine-based system operating with some level of autonomy and producing output based on data and inputs, whether from machines or humans, will fall under the purview of these rules. Moreover, companies developing AI for general use, like Google and OpenAI, will need to adhere to EU copyright law during the training of their systems.
Risk-Based Approach: Categories and Scrutiny Levels
A key feature of the EU’s AI Act is its risk-based approach. It categorizes AI systems into four risk categories: unacceptable risk, high risk, general purpose and generative AI, and limited risk. The level of scrutiny and requirements placed on AI systems will vary depending on their risk category.
For instance, higher-risk AI models, such as OpenAI’s GPT-4 (the model behind ChatGPT) and Google’s Gemini, will face additional scrutiny due to their potential to cause significant accidents or be misused for cyber attacks. Companies developing such AI systems will be required to provide clear information to users and maintain high-quality data on their products.
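The Act’s tiered structure can be pictured as a simple mapping from risk category to obligations. The sketch below is schematic only, based on the categories and duties described in this article rather than the Act’s legal text:

```python
# Schematic sketch of the EU AI Act's risk-based structure, based only on the
# categories and obligations mentioned in this article (not the legal text).
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"                    # prohibited outright
    HIGH = "high risk"                                    # strict requirements
    GENERAL_PURPOSE = "general purpose / generative AI"   # transparency duties
    LIMITED = "limited risk"                              # light-touch rules

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: [
        "banned: e.g. predictive policing, tracking emotions of students or employees",
    ],
    RiskTier.HIGH: [
        "provide clear information to users",
        "maintain high-quality data on the product",
    ],
    RiskTier.GENERAL_PURPOSE: [
        "respect EU copyright law when training",
        "label AI-generated deepfakes",
        "disclose details about the model",
    ],
    RiskTier.LIMITED: [
        "lighter transparency duties (not detailed in this article)",
    ],
}

for tier, duties in OBLIGATIONS.items():
    print(f"{tier.value}:")
    for duty in duties:
        print(f"  - {duty}")
```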
Regulations for High-Risk AI Systems
The Act also prohibits certain high-risk applications of AI, including the use of AI-powered technology by law enforcement to identify individuals, except in very serious cases. Predictive AI systems aimed at forecasting future crimes are also banned, as are systems designed to track the emotions of students or employees.
Transparency and Disclosure Requirements
Another important provision of the Act mandates the labelling of deepfakes—manipulated images, videos, or audio—to prevent the spread of disinformation. Moreover, companies developing AI, like OpenAI and Meta, will be compelled to disclose previously undisclosed details about their products.
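This article does not specify how such labels must be applied; as a minimal, purely illustrative example, an “AI-generated” tag can be written into an image’s metadata, as in the Pillow-based sketch below (industry standards such as C2PA content credentials are a more robust approach):

```python
# Minimal illustration of machine-readable "AI-generated" labelling using PNG
# text metadata via Pillow. The EU AI Act does not prescribe this exact
# mechanism; this is only a toy example of attaching a label to an image.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", generator)   # e.g. the model or tool name
    image.save(dst_path, pnginfo=meta)

def read_label(path: str) -> dict:
    return dict(Image.open(path).text)      # PNG text chunks as a plain dict

# Hypothetical file names for illustration only.
label_as_ai_generated("deepfake.png", "deepfake_labeled.png", generator="example-model")
print(read_label("deepfake_labeled.png"))
```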
In light of recent events, Google has taken steps to restrict its Gemini chatbot from discussing elections in countries holding elections this year, aiming to mitigate the risk of spreading misinformation.
Implications and Timeline
These regulations mark a significant milestone in ensuring the responsible development and use of AI technology within the European Union. Set to take effect from May 2025, they herald a new era of AI governance aimed at safeguarding both individuals and society as a whole.
-
Devin: World’s First AI Software Engineer Launched; Can Solve All Tasks With a Single Prompt
New Delhi: US-based company Cognition has unveiled Devin, a new artificial intelligence assistant that can write, code, and create using a single prompt.
The company, a startup backed by Peter Thiel’s Founders Fund, claims that Devin is the world’s first fully autonomous AI software engineer. Unlike other AI coding tools, it doesn’t just provide coding suggestions or autocomplete tasks; it can take on and complete an entire software project independently.
Cognition calls it a “tireless, skilled teammate” that is “equally ready to build alongside you or independently complete tasks for you to review.” Additionally, the company stated that “with Devin, engineers can focus on more interesting problems, and engineering teams can strive for more ambitious goals.”
“Devin sends the code to the user to test out,” the company posted on X (@cognition_labs) on March 12, 2024.
Notably, initial access to Devin has been limited to a select few. Early testers report that Devin is exceptionally good at coding, surpassing even the most advanced LLMs currently available, such as GPT-4 and Gemini, and say they could build whole websites and basic games in only 5-10 minutes using it.
“Today we’re excited to introduce Devin, the first AI software engineer. Devin is the new state-of-the-art on the SWE-Bench coding benchmark, has successfully passed practical engineering interviews from leading AI companies, and has even completed real jobs on Upwork,” Cognition posted on X (@cognition_labs) on March 12, 2024.
In testing, Devin demonstrated remarkable performance, solving 13.86% of open GitHub issues in the SWE-Bench benchmark, far surpassing other AI-powered assistants such as Anthropic’s Claude (4.8%) and GPT-4 (1.8%).
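Cognition has not published how Devin works internally. To make the idea of an “autonomous software engineer” concrete, the toy sketch below shows the kind of plan-write-test-iterate loop such agents generally follow, with the language model replaced by a stub (all names and the test task are hypothetical):

```python
# Toy sketch of an autonomous coding-agent loop (plan -> write code -> run tests
# -> iterate). This is NOT Devin's implementation, which Cognition has not
# published; `call_model` is a stub standing in for a real LLM.
import subprocess
import sys
import tempfile
from pathlib import Path

def call_model(prompt: str) -> str:
    """Stub for a code-generating LLM. A real agent would call an actual model here."""
    return "def add(a, b):\n    return a + b\n"

def run_tests(workdir: Path) -> subprocess.CompletedProcess:
    """Run the project's tests and capture the output for the next iteration."""
    return subprocess.run(
        [sys.executable, "-m", "pytest", "-q"], cwd=workdir, capture_output=True, text=True
    )

def agent_loop(task: str, max_iterations: int = 3) -> bool:
    workdir = Path(tempfile.mkdtemp())
    # A fixed test file plays the role of the task specification.
    (workdir / "test_task.py").write_text(
        "from solution import add\n\ndef test_add():\n    assert add(2, 3) == 5\n"
    )
    feedback = ""
    for _ in range(max_iterations):
        code = call_model(f"Task: {task}\nPrevious test output:\n{feedback}")
        (workdir / "solution.py").write_text(code)      # "write code" step
        result = run_tests(workdir)                     # "run tests" step
        if result.returncode == 0:
            return True                                 # tests pass, task complete
        feedback = result.stdout + result.stderr        # feed failures back to the model
    return False

print("Task solved:", agent_loop("Implement add(a, b)"))
```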
Cognition AI boasts notable investors, including PayPal co-founder Peter Thiel, who has injected $22 million into the company. The company’s founders, Scott Wu (CEO), Steven Hao (CTO), and Walden Yan (CPO), have a track record of success in international coding competitions, amassing 10 gold medals in top-tier events since their teenage years.
The tool is not intended to replace human engineers; it is designed to work hand-in-hand with them.
-
India AI Mission: Check 8 Key Components Of Cabinet’s Newly Launched Plan
New Delhi: Yesterday, March 8, 2024, the Union Cabinet approved the IndiaAI Mission, allocating Rs 10,371.92 crore for its implementation, with Union Minister Piyush Goyal briefing the media on the decision. The mission aims to boost AI development in the country over the next five years. For ease, we have decoded the key components of the IndiaAI Mission below.
IndiaAI Mission: How Will It Be Implemented?
According to Union Minister Piyush Goyal’s briefing, the mission will be implemented by the IndiaAI Independent Business Division (IBD) under the Digital India Corporation (DIC).
1. Compute Capacity
Speaking to reporters after the cabinet meeting, Union Minister Piyush Goyal said one of the primary focuses of the AI mission is to establish an ecosystem equipped with AI compute infrastructure comprising 10,000 or more graphics processing units (GPUs).
The government has planned to develop this infrastructure through public-private partnerships.
2. AI Marketplace
The government plans to create an AI marketplace, offering AI as a service and pre-trained models to AI innovators. This initiative aims to facilitate access to AI resources for developers and entrepreneurs.
3. Startup Financing
Deep-tech AI startups will receive support under this initiative. This will boost the growth of innovative ventures in the AI space.
4. Non-Personal Dataset Platform
The India AI Datasets Platform will streamline access to non-personal datasets for entrepreneurs and startups. A unified data platform will be developed to provide seamless access to these datasets, aiding Indian startups and researchers in their endeavors.
5. Innovation Centre
The India AI Innovation Centre will focus on developing and deploying indigenous large multimodal models and domain-specific foundational models in critical sectors, contributing to technological advancements.
6. AI Application in Critical Sectors
Through the IndiaAI Application Development Initiative, AI applications will be promoted in critical sectors, addressing problem statements sourced from the government. This initiative aims to leverage AI to enhance efficiency and address challenges in various sectors.
7. IndiaAI FutureSkills
Under the FutureSkills program, entry into AI programs and courses at undergraduate, master’s, and Ph.D. levels will be encouraged. Additionally, data and AI Labs are planned to be set up in Tier 2 and Tier 3 cities across India to provide foundational-level courses.
8. Responsible AI
To ensure the responsible development and deployment of AI, projects will be initiated to develop indigenous tools and frameworks. Self-assessment checklists for innovators and other guidelines and governance frameworks will also be established.
-
Elon Musk Sues OpenAI And CEO Sam Altman Over Agreement Breach
New Delhi: Elon Musk, the CEO of Tesla and SpaceX, has sued OpenAI and its CEO, Sam Altman, accusing them of violating their initial contractual agreements concerning artificial intelligence (AI). Filed in a San Francisco court in the United States, the lawsuit centers on OpenAI’s recent development of the GPT-4 natural language model.
Musk, who owns the social media company X, has accused OpenAI and Microsoft of improperly licensing GPT-4, despite an agreement that the company’s artificial general intelligence work would remain non-profit and aimed at serving humanity.
“Musk has long recognised that AGI poses a grave threat to humanity — perhaps the greatest existential threat we face today,” read the lawsuit.
In Musk’s lawsuit, he outlines grievances including breach of contract, violation of fiduciary duty, and unfair business practices. Musk served as a founding board member of OpenAI until 2018.
According to the lawsuit, OpenAI’s initial research was performed in the “open, providing free and public access to designs, models, and code”.
When OpenAI researchers discovered that an algorithm called “Transformers,” initially invented by Google, could perform many natural language tasks without any explicit training, “entire communities sprung up to enhance and extend the models released by OpenAI”.
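The capability the lawsuit describes, transformer models handling language tasks they were never explicitly trained for, can be illustrated today with zero-shot use of an open-source model via the Hugging Face transformers library (which shares the architecture’s name but is unrelated to OpenAI’s or Google’s code):

```python
# Illustration of the lawsuit's point: a pre-trained transformer handling a task
# it was never explicitly trained for (zero-shot classification). Uses the
# open-source Hugging Face `transformers` library, unrelated to OpenAI's code.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The board must decide when the system has reached general intelligence.",
    candidate_labels=["technology governance", "sports", "cooking"],
)
# Prints the best-matching label and its confidence score.
print(result["labels"][0], round(result["scores"][0], 3))
```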
Altman became OpenAI CEO in 2019. On September 22, 2020, OpenAI entered into an agreement with Microsoft, exclusively licensing to Microsoft its Generative PreTrained Transformer (GPT)-3 language model.
“Most critically, the Microsoft license only applied to OpenAI’s pre-AGI technology. Microsoft obtained no rights to AGI. And it was up to OpenAI’s non-profit Board, not Microsoft, to determine when OpenAI attained AGI,” the lawsuit further read.
Musk said that this case is filed to compel OpenAI to “adhere to the Founding Agreement and return to its mission to develop AGI for the benefit of humanity, not to personally benefit the individual defendants and the largest technology company in the world”. (With Inputs From IANS)