Artificial intelligence is revolutionizing learning in schools, but at what cost to student privacy? The integration of AI tools into education has raised global alarm, particularly over the mishandling of children’s sensitive information.
United Nations agencies, from UNHCR to UNESCO, have jointly urged ironclad protections for the data that AI platforms collect from minors.
The PowerSchool scandal in the United States exemplifies the stakes: a breach disclosed in 2025 compromised the data of some 60 million students and a million educators, leaking highly personal identifiers.
India mirrors this vulnerability. Educational bodies here faced more than 200,000 cyberattacks and 400,000 data-leak incidents within nine months, according to a recent study.
Against this backdrop, Pratham’s partnership with Anthropic introduces the Claude-powered ATM. The tool scans handwritten student work, generates tailored assessments, scores responses against rubrics, and delivers customized feedback in Hindi and English.
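To make the rubric step concrete, here is a minimal, purely illustrative sketch of weighted rubric scoring. The criteria, weights, and function names are assumptions for illustration, not details of Pratham’s or Anthropic’s actual system.

```python
# Hypothetical rubric: criterion -> weight (weights sum to 1.0).
# These criteria are illustrative, not the ATM's real rubric.
RUBRIC = {
    "accuracy": 0.5,
    "completeness": 0.3,
    "presentation": 0.2,
}

def score_response(criterion_scores: dict) -> float:
    """Combine per-criterion scores (each 0.0-1.0) into a weighted total out of 100."""
    total = sum(RUBRIC[c] * criterion_scores.get(c, 0.0) for c in RUBRIC)
    return round(total * 100, 1)

# Example: strong accuracy, full completeness, weaker presentation.
print(score_response({"accuracy": 0.8, "completeness": 1.0, "presentation": 0.5}))
# prints 80.0
```

However the scoring is actually implemented, every response image processed this way is personal data about a child, which is what makes the consent question below so pressing.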
India’s DPDP Act throws a spotlight on potential pitfalls. It requires verifiable consent from a parent or guardian before a minor’s data is processed, with draft rules outlining OTP- and ID-linked verification processes.
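An OTP-linked consent flow of the kind the draft rules describe might look like the following sketch. Everything here is an assumption for illustration (function names, the five-minute validity window, the consent-record fields); the draft rules specify requirements, not an implementation.

```python
import hashlib
import hmac
import secrets
import time

OTP_TTL_SECONDS = 300  # assumed validity window for the one-time password

def issue_otp():
    """Generate a 6-digit OTP; store only its hash, never the OTP itself."""
    otp = f"{secrets.randbelow(10**6):06d}"
    digest = hashlib.sha256(otp.encode()).hexdigest()
    return otp, digest, time.time() + OTP_TTL_SECONDS

def verify_otp(submitted: str, digest: str, expires_at: float) -> bool:
    """Check the guardian's submitted code against the stored hash, within the TTL."""
    if time.time() > expires_at:
        return False
    candidate = hashlib.sha256(submitted.encode()).hexdigest()
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

def record_consent(guardian_id: str, child_id: str) -> dict:
    """Minimal auditable consent record: who consented, for whom, for what, when."""
    return {
        "guardian_id": guardian_id,
        "child_id": child_id,
        "purpose": "AI-assisted assessment",
        "granted_at": time.time(),
    }

# In practice the OTP would be sent to the guardian's verified phone number;
# here we simulate the guardian entering it back before any data is processed.
otp, digest, expires_at = issue_otp()
assert verify_otp(otp, digest, expires_at)
consent = record_consent("guardian-123", "student-456")
```

The point of the sketch is the ordering: a verifiable, recorded consent event must precede any capture or transmission of a child’s data, which is precisely the sequencing at issue below.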
Yet the ATM’s workflow (capturing images of children’s work and transmitting them to US-based servers for AI analysis) may proceed without genuine parental awareness. This gap threatens both compliance and child safety.
Stakeholders must prioritize ethical AI deployment. Enhanced transparency, localized data storage, and rigorous audits are essential to fortify defenses in the edtech arena.