In a move to combat digital deception, India's IT Ministry has rolled out stringent new guidelines for AI deepfakes on social media. Legal experts have welcomed the refinements, which give platforms clearer, more precise instructions for curbing misleading content.
Gone are the broad mandates for labeling all AI outputs. The revised IT Rules, 2021, now home in on harmful, deceptive deepfakes circulating on platforms such as Meta's Facebook and Instagram and Google-owned YouTube. Content creators must embed disclosures, via on-screen labels or digital watermarks, revealing AI origins.
This enables swift oversight by authorities and helps users distinguish real from fabricated media. The policy aims to foster media literacy amid the AI boom and to prevent manipulation in sensitive areas such as politics and public discourse.
"It's a pragmatic evolution," noted legal expert Sajai Singh from JSA Advocates. "Focusing on deceptive content relieves platforms from labeling everything, making compliance feasible while protecting users."
Notable updates slash the response time for removing flagged deepfakes to just 3 hours from the previous 36. Labels must now be permanent and tamper-proof, and platforms must invest in AI detection systems to identify unlawful content.
As deepfake incidents surge globally, from fake celebrity videos to election interference, India's proactive stance positions it among the leaders in AI governance. The rules aim both to safeguard democratic processes and to encourage ethical AI development, signaling a shift toward more accountable online ecosystems.