Tag: deepfakes

  • Tesla And SpaceX CEO Elon Musk's X Cracks Down On Deepfakes With Improved Image Matching Update

    Shallowfakes are photos, videos and voice clips created without the help of artificial intelligence (AI), using widely available editing and software tools.

  • Misinformation Spread Via Deepfakes Biggest Threat To Upcoming Polls In India: Tenable

    New Delhi: Misinformation and disinformation spread through artificial intelligence (AI)-generated deepfakes and fake content are the biggest threats to the upcoming elections in India, exposure management company Tenable said on Sunday.

    According to the company, these threats will be shared across social media and messaging platforms like WhatsApp, X (formerly Twitter), Instagram, and others.

    “The biggest threats to the 2024 Lok Sabha elections are misinformation and disinformation as part of influence operations conducted by malicious actors against the electorate,” Satnam Narang, Senior Staff Research Engineer at Tenable, told IANS.

    A recent report by Tidal Cyber highlighted that this year, 10 countries will face the highest levels of election cyber interference threats, including India.

    Recently, fabricated deepfake videos of former US President Bill Clinton and President Joe Biden were circulated to confuse voters ahead of the upcoming US presidential election.

    Experts note that the proliferation of deepfake content surged from late 2017, with over 7,900 videos online. By early 2019, this number had nearly doubled to 14,678, and the trend continues to escalate.

    “With the increase in generative AI tools and their use growing worldwide, we may see deepfakes, be it in images or video content, impersonating notable candidates seeking to retain their seats or those hoping to unseat incumbents in parliament,” Narang added.

    The Indian government has recently issued directives to social media platforms such as X and Meta (formerly Facebook), urging them to regulate the proliferation of AI-generated deepfake content.

    Additionally, ahead of the Lok Sabha elections, the Ministry of Electronics & IT (MeitY) has issued an advisory asking these platforms to remove AI-generated deepfake content.

    Tenable suggests that the easiest way to identify a deepfake image is to look for nonsensical text or garbled, alien-looking language within it.
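    Tenable's tip can be sketched programmatically. The toy function below is purely illustrative and is an assumption of this article, not a Tenable tool: it takes text already extracted from an image (by a separate OCR step, not shown) and flags it when too few tokens are recognisable dictionary words. The word list here is a tiny stand-in for a real dictionary.

```python
# Illustrative sketch (not a Tenable product): flag OCR-extracted text from an
# image as suspicious when most tokens are not recognisable words, mirroring
# the "nonsensical, alien-looking text" heuristic described above.
KNOWN_WORDS = {"vote", "election", "president", "the", "for", "now"}  # toy dictionary

def looks_garbled(ocr_text: str, threshold: float = 0.5) -> bool:
    """Return True if too few tokens look like real words."""
    tokens = [t.strip(".,!?").lower() for t in ocr_text.split()]
    if not tokens:
        return False  # no text found, so the heuristic does not apply
    recognised = sum(t in KNOWN_WORDS for t in tokens)
    return recognised / len(tokens) < threshold

print(looks_garbled("VOTE for the president now"))   # → False (plausible text)
print(looks_garbled("VQTE fhor tne presdient nqw"))  # → True (alien-looking text)
```

    In practice a full dictionary (or a language model's perplexity score) would replace the toy word list, but the principle, that deepfake generators often render unreadable pseudo-text, is the same.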

  • EU AI Act: Regulations Impacting ChatGPT, Google Gemini, And Deepfakes - All You Need To Know

    New Delhi: The European Parliament has just given its green light to the world’s first comprehensive AI law, the EU AI Act, which will govern how artificial intelligence (AI) is used across the continent. These rules are designed to ensure that humans remain in control of this powerful technology and that it serves the best interests of humanity.

    Interestingly, it took a whopping five years for these rules to pass through the EU Parliament, indicating the thoroughness and significance of the regulations.

    Scope of the Regulations: Which AI Systems Are Covered?

    Under the EU AI Act, these regulations will have a broad impact, affecting AI systems such as OpenAI’s ChatGPT and Google’s Gemini, among others. Essentially, any machine-based system operating with some level of autonomy and producing output based on data and inputs, whether from machines or humans, will fall under the purview of these rules. Moreover, companies developing AI for general use, like Google and OpenAI, will need to adhere to EU copyright law during the training of their systems.

    Risk-Based Approach: Categories and Scrutiny Levels

    A key feature of the EU’s AI Act is its risk-based approach. It categorizes AI systems into four risk categories: unacceptable risk, high risk, general purpose and generative AI, and limited risk. The level of scrutiny and requirements placed on AI systems will vary depending on their risk category.

    For instance, higher-risk AI models, such as OpenAI’s GPT-4 and Google’s Gemini, will face additional scrutiny due to their potential to cause significant accidents or be misused for cyber attacks. Companies developing such AI systems will be required to provide clear information to users and maintain high-quality data on their products.
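    The four-tier structure described above can be illustrated with a minimal lookup. The category assignments below are illustrative examples drawn from the Act's general approach, not legal determinations, and the system names are hypothetical:

```python
# Illustrative sketch of the EU AI Act's four risk tiers. The example systems
# and their assigned tiers are assumptions for illustration only; real
# classification under the Act is a legal assessment.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"                      # banned outright
    HIGH = "high risk"                                      # strict obligations
    GENERAL_PURPOSE = "general purpose and generative AI"   # transparency duties
    LIMITED = "limited risk"                                # light-touch rules

EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool": RiskTier.HIGH,
    "general chatbot": RiskTier.GENERAL_PURPOSE,
    "spam filter": RiskTier.LIMITED,
}

def scrutiny_note(system: str) -> str:
    """Report the illustrative risk tier for a named system."""
    tier = EXAMPLES.get(system, RiskTier.LIMITED)
    return f"{system}: {tier.value}"

print(scrutiny_note("general chatbot"))  # → general chatbot: general purpose and generative AI
```

    The point of the tiering is that obligations scale with risk: a spam filter faces almost no requirements, while a high-risk hiring tool must meet data-quality and transparency rules.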

    Prohibited Applications

    The Act also prohibits certain high-risk applications of AI, including the use of AI-powered technology by law enforcement to identify individuals, except in very serious cases. Predictive AI systems aimed at forecasting future crimes are also banned, as are systems designed to track the emotions of students or employees.

    Transparency and Labelling Requirements

    Another important provision of the Act mandates the labelling of deepfakes—manipulated images, videos, or audio—to prevent the spread of disinformation. Moreover, companies developing AI, like OpenAI and Meta, will be compelled to disclose previously undisclosed details about their products.
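    As a minimal sketch of what such a labelling obligation could look like in practice (the field names and disclosure wording below are assumptions; the Act requires labelling but does not prescribe this format):

```python
# Hypothetical sketch: attach a machine-readable AI-generated disclosure to
# media metadata. Field names and wording are illustrative assumptions, not
# a format mandated by the EU AI Act.
def label_as_ai_generated(metadata: dict, media_type: str) -> dict:
    """Return a copy of the metadata with an AI-generation disclosure added."""
    labelled = dict(metadata)  # avoid mutating the caller's dict
    labelled["ai_generated"] = True
    labelled["disclosure"] = f"This {media_type} was generated or manipulated by AI."
    return labelled

meta = label_as_ai_generated({"title": "Campaign clip"}, "video")
print(meta["disclosure"])  # → This video was generated or manipulated by AI.
```

    Industry efforts such as content-provenance metadata standards take a similar approach, embedding the disclosure in the file rather than relying on a visible watermark alone.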

    In light of recent events, Google has taken steps to restrict its Gemini chatbot from discussing elections in countries holding elections this year, aiming to mitigate the risk of spreading misinformation.

    Implications and Timeline

    These regulations mark a significant step in ensuring the responsible development and use of AI technology within the European Union. Set to come into force from May 2025, they usher in a new era of AI governance aimed at safeguarding both individuals and society as a whole.

  • Researchers at IIT Ropar develop imposter-detecting tool, ‘FakeBuster’

    Express News Service
    CHANDIGARH: The Indian Institute of Technology (IIT) Ropar, along with Australia-based Monash University, has developed a unique detector to find out whether imposters are attending virtual conferences in the new normal amid the Covid pandemic.

    ‘FakeBuster’ helps find faces manipulated on social media to defame or mock people, and even enables users to detect whether another person’s video is spoofed during a video meeting.

    The tool is independent of video conferencing solutions and has been tested on the Zoom and Skype applications.

    A paper on this technique, titled ‘FakeBuster: A DeepFakes Detection Tool for Video Conferencing Scenarios’, was presented at the 26th International Conference on Intelligent User Interfaces in the US.

    The application has been developed by a four-member team of researchers. 

    “Sophisticated artificial intelligence techniques have spurred a dramatic increase in the manipulation of media content. Such techniques keep evolving and becoming more realistic, making detection all the more difficult,” said Dr Abhinav Dhall, one of the team members.

    He further added that the use of manipulated media content to spread fake news, pornography and other such online material has been widely observed, with major repercussions. Such manipulations, he said, have recently found their way into video-calling platforms through spoofing tools based on the transfer of facial expressions. These fake facial expressions are often convincing to the human eye and can have serious implications. These real-time mimicked visuals (videos), known as ‘deepfakes’, can even be used during online examinations and job interviews.

    “The deepfake detection tool ‘FakeBuster’ works in both online and offline modes. Since the device can presently be attached to laptops and desktops only, we are aiming to make the network smaller and lighter to enable it to run on mobile phones and devices as well,” said Prof Ramanathan Subramanian, adding that the team is also working on using the device to detect fake audio.

    Besides Dhall and Subramanian, the other team members are two students, Vineet Mehta and Parul Gupta.

    The team claims that ‘FakeBuster’ is one of the first tools to use deepfake detection technology to catch imposters during video conferencing. The device has already been tested and is expected to hit the market soon.