
Meet the Students Resisting the Dark Side of AI
For Christianna Thomas, a senior at Heights High School in Texas, an artificial intelligence policy once stymied an attempt to learn.
Thomas is in her school’s International Baccalaureate program, which uses an AI detector to check for plagiarism. “We use AI to check for other types of AI,” Thomas says.
But at her school, AI also filters what information students can reach.
When Thomas tried to research the education system in Communist Cuba during the Cold War for a history project, she found she couldn't access the materials she needed. Her school's web filter kept blocking her, both on her school computer and, when she was on campus, on her personal laptop, too.
Schools often use AI for web filtering, in an effort to keep students away from unsafe material, but some students worry that it also keeps them from finding useful information. The technology also seems to snag vital tools, students say: The Trevor Project, which offers a hotline for suicidal teens, can get caught by chatbot bans because its chat feature connects students to a certified counselor; JSTOR, a database of millions of scholarly articles, can be blocked because it contains some sexually explicit articles; and the Internet Archive, which students often use as a free way to access information, gets blocked as well.
For Thomas, this deployment of AI meant she couldn’t research the topic she found compelling. She had to change her focus for the assignment, she says.
Educator concerns about AI have received plenty of attention. Less widely understood is the fact that many students have their own worries about the ways artificial intelligence is now shaping their learning.
In giving schools guidance on the topic, state policies have so far ignored the most obvious civil rights concern raised by this technology, some argue: police surveillance of students. At a time when students fear a federal government that is clamping down on immigrants, targeting students for their political opinions and enabling book bans, some worry about AI-enhanced surveillance tools, which can increase how often students come into contact with police and other law enforcement.
These issues concern students, along with related worries about cheating accusations and deepfakes, but they are not entirely dismissive of the technology, several teens told EdSurge. In a debate that often unfolds around them rather than with them, students feel their voices should be amplified.
The Unblinking Eye
Schools sometimes rely on AI to scan students' online activity and assess risk, flagging when an educator or other adult needs to step in. Some studies have suggested that this surveillance is "heavy-handed," with nearly all edtech companies reporting that they monitor students both at school and outside of it.
It can also be hard to parse how all the information that's collected is used. For instance, the Knight First Amendment Institute at Columbia University filed a lawsuit against Grapevine-Colleyville Independent School District in Texas earlier this year, after the district declined to respond to a public records request the institute had filed about how the district was using the data it gathered from surveilling students on school-issued devices.
Students have been arrested over flagged messages, including a 13-year-old in Tennessee who was strip-searched after an arrest she says came after scans misinterpreted a joke in a private chat linked to her school email account. Her school uses the monitoring service Gaggle to scan students' messages and content for threats, according to legal documents. Reporting has alleged that these systems are prone to false positives, flagging many innocuous comments and images, and student journalists in Kansas have filed a lawsuit claiming that their use violates constitutional rights.
Students have started pushing back against all this. For example, Thomas works with Students Engaged in Advancing Texas, a nonprofit that seeks to bring students into policymaking by training them on how to speak at school and mobilize around topics they care about, such as book bans and how schools interact with immigration enforcement, Thomas says.
She helps other students organize around issues like web filtering. The practice troubles her in part because it's unclear whether humans are reviewing these decisions, she says. When Thomas asked a nearby district with stricter rules for a list of its banned websites, the IT staff told her compiling one was "physically impossible." In some ways, that makes sense, she says, as the list would be "super duper long." But it also leaves her with no way to verify that an actual human being is overseeing these decisions.
There’s also a lobbying component.
Students Engaged in Advancing Texas has lobbied for Texas House Bill 1773, which would create nonvoting student trustee positions on school boards in the state. The group also saw some success in challenging a Texas bill meant to shield students from "obscene content," which the group alleged limited students' speech by restricting their access to social media platforms. These days, the group is advancing a "Student Bill of Rights" in the state, seeking guarantees of freedom of expression, support for health and well-being, and student agency in education decisions.
Thomas says she didn’t personally lobby for the school boards bill, but she assisted with the lawsuit and the Student Bill of Rights.
Other organizations have also looked to students to lead change.
Fake Images, Real Trauma
Until she graduated from high school last year, Deeksha Vaidyanathan led the California chapter of Encode, a student-led advocacy organization.
Early in her sophomore year, Vaidyanathan argued about banning biometric technology at the California Speech and Debate Championships. While researching police use of the technology, she came across some of Encode's work on ethics in AI. "So that kind of sparked my interest," she says.
She’d already been introduced to Encode by a friend, but after the competition, she joined up and spent the rest of her high school career working with the organization.
Founded in 2020 by Sneha Revanur, once called the "Greta Thunberg of AI," Encode supports grassroots youth activism on AI around the country, and indeed the world. While leading the organization's California chapter, and in independent projects inspired by her time with Encode, Vaidyanathan has worked on research projects examining how police use predictive systems like facial recognition to track down criminals. She has also worked to pass policies in her local school district on using AI ethically in the classroom and limiting the harm caused by deepfakes.
For her, the work was also close to home.
Vaidyanathan noticed that her school, Dublin High School in California's East Bay, had disparate policies on AI use. Some teachers allowed students to use it, while others banned it, relying on surveillance tools like Bark, Gaggle and GoGuardian to catch and punish students who cheated. Vaidyanathan felt a better approach would be to regulate the technology consistently and ensure it's used ethically on assignments. She worked with the district's chief technology officer; together they surveyed students and teachers and put together a policy over six months. It eventually passed. No other school within a 100-mile radius had passed a policy like it before, according to Vaidyanathan, and it provided a framework that inspired attempts to put similar policies in place in Indiana, Philadelphia and Texas, she adds.
Now about to start college at the University of California, Berkeley, Vaidyanathan is eager to continue working with the organization.
“Most areas of AI control in the classroom are probably neglected,” Vaidyanathan says.
But the largest of these is deepfakes. Young girls in schools around the country are being targeted by fake, sexually explicit likenesses of themselves created using AI. So-called “nudify” apps can take a single photo and spin out a convincing fake, leading to trauma.
Surveys of students suggest the practice is common.
And in a review released earlier this year of the guidance states give schools, the Center for Democracy & Technology identified deepfakes as a notable weak area, meaning that schools aren't receiving significant counsel from states about how to handle these thorny issues.
Moreover, even guidelines that Vaidyanathan considers effective, such as California's or Oregon's, aren't official policies and therefore don't have to be enacted in classrooms, she says. When Encode tries to work with schools, the schools often seem overwhelmed with information and uncertain of what to do. Meanwhile, student testimonies collected by the group and shared with EdSurge show that students are struggling with the problem.
AI should empower people rather than control them, says Suchir Paruchuri, a rising high school senior and the leader of the Texas chapter of Encode.
It's important to limit who has access to student data, he says, and to bring the voices of those affected into decision-making. Right now, his chapter of Encode is working on local legislative advocacy, particularly on policies addressing nonconsensual sexual deepfakes, he says. The group has pushed the Texas Legislature to consider students' perspectives, he adds.
The goal is “AI safety,” Paruchuri says. To him, that means making sure AI is used in a way that protects people’s rights, respects their dignity and avoids unintended harm, especially to vulnerable groups.