
Right To Privacy: Kids Are Learning Under Surveillance

Protecting Children’s Right To Privacy In An Era Of AI
When COVID-19 forced schools to close in 2020, educators and parents rushed to adopt digital education (EdTech) platforms to keep students learning from home. In the years since, researchers and privacy advocates have uncovered a troubling reality: many educational technology companies have been collecting far more student data than necessary, tracking children’s behaviour, building detailed profiles, and in some cases selling information to third parties. What began as an emergency response has evolved into a rights-violating surveillance infrastructure embedded in the everyday educational experience of an entire generation.
The rapid integration of AI into classroom environments has fundamentally altered how education operates. Governments, school systems, and private actors increasingly frame AI as essential for preparing students for an “AI future,” channeling significant public resources toward these technologies. Yet, as human rights organizations and independent researchers have documented, the rapid deployment of AI in education has frequently occurred without adequate safeguards, exposing children and marginalized learners to serious rights violations.
It’s important to acknowledge the opportunities that AI offers in advancing the right to education and inclusion. AI can support the right to education, recognized in international law and embodied in instruments such as the UN Convention on the Rights of the Child. When designed thoughtfully, AI systems can tailor instruction to meet the needs of diverse learners, help students with disabilities access adaptive content, and assist teachers in identifying learning gaps early. For example, learner-centered AI may provide targeted support for students struggling with particular concepts, helping reduce dropout rates and promoting inclusion. Teachers can leverage AI tools to reduce administrative burdens, freeing up more time for meaningful interaction with students. Studies and policy frameworks, including OECD working papers, highlight that AI can contribute to equity and inclusion when its deployment is accompanied by thoughtful policies addressing access, bias, and transparency.
However, this substantial potential of AI in education must be viewed within the broader context of three critical human rights implications:
- The erosion of children’s right to privacy through systematic surveillance.
- The commercial exploitation of student data.
- The lack of transparency and accountability in how these EdTech systems operate.
Privacy, Surveillance, And Data Exploitation
As classrooms digitize, the promise of EdTech meets mounting concern over an unintended byproduct: student surveillance. One of the most well-documented areas of harm is children’s right to privacy. A landmark 2022 investigation by Human Rights Watch (HRW) found that governments across 49 countries endorsed or required EdTech products that systematically surveilled children during online learning. HRW found that 89% (146 of 164) of the government-recommended online learning tools it reviewed engaged in data practices that risked or violated children’s rights. In contrast, HRW also identified a dozen EdTech sites, from countries including France, Germany, Japan, and Argentina, that functioned with zero tracking technology. These instances confirm that educational platforms can thrive without compromising user privacy; the determining factor is simply whether organizations choose to prioritize it. The HRW investigation concluded that governments had failed in their duty to protect children’s rights to privacy, education, and freedom of thought during the pandemic-era rollout of these platforms. This failure occurred despite children’s heightened vulnerability during a global crisis and their increased reliance on digital tools for learning.
EdTech products that surveil students track their activities outside school hours and transfer data to advertising companies without genuine consent or transparency. These products monitor, or have the capacity to monitor, children, in most cases secretly and without the consent of children or their parents, in many cases harvesting personal data such as who they are, where they are, what they do in the classroom, who their family and friends are, and what kind of device their families could afford for them to use.
The rush toward technological fixes outpaced rights considerations, creating surveillance infrastructure that persists today. From a rights perspective, these practices violate multiple interrelated protections. They undermine fundamental privacy rights, contradict the principle that children’s best interests must guide all decisions affecting them, and compromise the right to education free from exploitation. Pervasive surveillance during formative years normalizes constant monitoring, potentially shaping how young people understand privacy, autonomy, and their relationship with authority in ways that extend far beyond the school walls.
Exploitation Of Student Data By Commercial Actors
In 2022, researchers at Internet Safety Labs found that up to 96% of apps used in U.S. schools share student information with third parties, and 78% of them share this data with advertisers and data brokers. Given that children are a vulnerable group, their data, increasingly including biometric data, should be handled with the highest level of protection. International human rights law places primary responsibility on governments to protect children’s rights, even when technologies are developed and operated by private companies. Yet many EdTech products embed technologies that track children’s online behavior across contexts, collecting detailed information about who they are, where they are, and how they learn, while routinely sharing this data with third parties in the advertising technology ecosystem, often without clear consent or parental awareness. This practice undermines children’s right to privacy, access to information, and freedom of thought, transforming educational environments into spaces of commercial data extraction.
Ad trackers embedded in educational platforms transmit student data to a network of third-party entities, including marketing platforms, analytics firms, and data brokers, which compile this information into detailed behavioral profiles used for commercial targeting. Children’s learning activities thus generate commodified data streams that fuel advertising ecosystems far removed from educational purposes. A striking example emerged in Brazil, where Estude em Casa, the public online learning platform of the state of Minas Gerais, illustrated this troubling intersection of education and commercial surveillance. HRW documented that the website, used by children across the state, was transmitting students’ activity data to a third-party advertising company through multiple ad trackers, third-party cookies, and Google Analytics “remarketing audiences.” Children’s learning behaviors were thus feeding directly into commercial advertising ecosystems, far beyond any educational purpose. After Human Rights Watch publicly highlighted these privacy violations in reports issued in late 2022 and early 2023, the Minas Gerais education secretariat removed all ad tracking from the platform in March 2023, underscoring the urgent need for stronger safeguards to protect children’s right to digital privacy.
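To make the mechanism concrete, the sketch below shows one way a school district, parent group, or researcher might do a first-pass audit of a learning platform’s page for references to well-known third-party tracking domains. It is a minimal illustration only, not HRW’s methodology or toolchain: the URL is a placeholder, the tracker-domain list is a small assumed sample rather than an authoritative blocklist, and a real audit would also inspect cookies, runtime network requests, and dynamically loaded scripts.

```python
# Minimal sketch: flag references to third-party tracking domains in a page's HTML.
# Assumptions: PAGE_URL is a placeholder and TRACKER_DOMAINS is an illustrative sample;
# a thorough audit would also capture cookies and runtime network traffic.
import re
import urllib.request
from urllib.parse import urlparse

PAGE_URL = "https://example-edtech-platform.org/login"  # hypothetical platform URL
TRACKER_DOMAINS = {
    "doubleclick.net",
    "google-analytics.com",
    "googletagmanager.com",
    "facebook.net",
}

def find_third_party_trackers(page_url: str) -> set[str]:
    """Return tracker domains referenced by src/href attributes in the page's HTML."""
    first_party = urlparse(page_url).hostname or ""
    with urllib.request.urlopen(page_url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    # Collect absolute URLs referenced in src= or href= attributes.
    referenced = re.findall(r'(?:src|href)=["\'](https?://[^"\']+)["\']', html)
    hits = set()
    for url in referenced:
        host = urlparse(url).hostname or ""
        if host and host != first_party and any(
            host == d or host.endswith("." + d) for d in TRACKER_DOMAINS
        ):
            hits.add(host)
    return hits

if __name__ == "__main__":
    for domain in sorted(find_third_party_trackers(PAGE_URL)):
        print("third-party tracker reference:", domain)
```

Even such a simple static check makes visible how many advertising and analytics domains a single “educational” page can call out to, which is precisely the kind of finding that prompted the Minas Gerais remediation described above.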
Lack Of Transparency And Accountability
AI has moved far beyond a supplementary role in education; it now operates throughout all levels of school systems. Proponents justify this expansion through appeals to efficiency, safety, and individualized learning. Human rights concerns arise when these systems become compulsory, function without transparency, demand extensive data gathering, and perform unreliably, especially when applied to young people who cannot meaningfully consent to their use.
A high-profile enforcement action in the United States in December 2025 illustrates how deeply a lack of transparency and accountability by EdTech companies can violate children’s rights. After a 2021 cyberattack exposed the personal information of more than 10 million students, including grades, health details, and other sensitive records, federal and state regulators finally took action against the education technology provider Illuminate Education. The Federal Trade Commission and the attorneys general of California, Connecticut, and New York found that the company had misled school districts about its cybersecurity safeguards, failed to fix known vulnerabilities, and delayed notifying schools and families about the breach. The resulting settlement requires stronger security measures and the deletion of unneeded data, and imposes $5.1 million in penalties. Yet it offered little meaningful remedy for affected students and families, showing how enforcement often arrives only after harm has occurred, and how commercial actors are permitted to amass vast troves of student data while externalizing the consequences of failure onto children, parents, and public institutions.
Moving Forward: Building Rights-Based AI-Powered EdTech Systems
In 2026, as the integration of AI into education continues to accelerate, the need for comprehensive governance frameworks that uphold human rights has never been more urgent. AI in education need not be incompatible with human rights principles, but current practices demonstrably are.
Aligning AI deployment in education with human rights standards requires fundamental reforms by both governments and the private sector. International organizations are actively shaping guidance for responsible AI use. As part of UNICEF’s AI for Children project, its 2025 Guidance on AI and Children sets out ten requirements for “child-centered AI,” including regulatory oversight, data privacy, nondiscrimination, safety, transparency, accountability, and inclusion. These principles aim to ensure that AI systems uphold children’s rights and that technology is designed and governed to protect and benefit learners. Such safeguards are essential to fulfilling the obligations of states and the private sector under international children’s rights and education law.
A rights-based approach demands a reorientation of priorities. Rather than casually experimenting on children by implementing unevidenced technologies in their classrooms, we must ask what children need and what protections their rights require. Innovation must be evaluated not by technical sophistication or efficiency promises, but by demonstrated capacity to enhance educational quality while respecting children’s rights and dignity. Without this shift, AI risks becoming not an instrument of educational empowerment but a mechanism whose harms fall most heavily on children who are already the most vulnerable and marginalized within education systems. For those of us who believe that children’s rights are fundamental, we must boldly challenge claims about AI’s “potential” and demand concrete evidence and robust, rights-based regulation, both to shape how these systems are developed (ensuring they are ethical, effective, and respectful of children’s rights) and to address the risks we already know about, along with those still emerging.



