Social media platform X is tightening the screws on deceptive content amid the ongoing US-Israel-Iran conflict. The company says it will suspend earnings for creators who post AI-generated videos of battles or armed clashes without explicit disclosure.
With deepfakes proliferating in wartime, the risk of public confusion has skyrocketed. Convincingly lifelike synthetic clips can crowd out genuine reporting, sowing doubt about real-world events and eroding trust in digital information.
Product lead Nikita Bier laid out the penalties: a 90-day revenue-share blackout for first offenses, escalating to lifetime bans for repeat violators. In his statement, Bier highlighted how modern AI tools make forgery 'incredibly simple,' stressing the need for transparency to deliver reliable wartime updates.
Detection will blend AI-powered scanners capable of spotting generative media with crowd-sourced Community Notes. This user-driven verification system has become central to X’s moderation strategy since embracing a more open review model.
X's creator earnings scheme rewards high-engagement posts with a share of ad revenue, a model some experts criticize for incentivizing controversy-driven content. Critics point to loose guardrails and subscription-based access as enablers of misinformation.
For now the policy is limited to armed-conflict visuals, signaling a proactive stance by X while stopping short of tackling AI fakes in elections or commerce. As synthetic content evolves, expect platforms to keep refining their defenses against digital deception.