Artificial intelligence-generated content is stirring up misinformation and affecting students’ daily lives, but schools can play a role in helping children and teens navigate these evolving problems.
From navigating news in the run-up to this year’s presidential election to handling concerns over AI-generated nude photos of students, “high schools are going to have their work cut out for them on how they help students around this,” said Laura Tierney, founder and CEO of the Social Institute, which offers resources and tools to schools on social media and technology.
Already, the issue of pornographic deepfakes is beginning to surface in K-12 schools. Tierney cited a recent case in which AI-generated nude images of students circulated at Westfield High School in New Jersey in October. The incident led one of the victims to advocate for federal legislation to prevent the spread of deepfake pornography.
When it comes to the reach of misinformation via deepfakes, Tierney said that “if it can happen on a larger scale, it can definitely happen on a local level” with students and schools.
Tierney pointed to an example in Slovakia, where an AI-generated audio recording went viral days before the country’s September election. The fake recording recreated the voice of a top candidate to make it sound like he was bragging that he had rigged the election, CNN reported. The candidate was defeated, sparking fears that deepfakes could manipulate U.S. elections as well.
These examples show why media literacy should be taught at a young age, said Erin McNeill, founder and CEO of Media Literacy Now. The nonprofit advocates for K-12 media literacy education nationwide.
For McNeill, the deepfake incident in New Jersey is one of the worst-case scenarios for schools. Media literacy education is one tool that could have prevented the situation from unfolding in the first place, she said. But schools and teachers need help and resources to teach these skills.
Young people “don’t necessarily understand the consequences of their actions,” McNeill said. “We’re leaving them just completely exposed to these consequences with no guidance, and the risk is just huge.”
She added, “It’s absurd that we’re allowing this to happen. It’s our fault. It’s not these kids’ fault.”
Tierney said she doesn’t see the problem of deepfakes going away anytime soon. If anything, she expects more examples to “surge” in the news. However, she said, “schools can use those moments as catalysts for conversation with students, and they can huddle with students about ‘how do you tell if something is a deepfake.’”
Teaching students how to spot deepfakes
As news of deepfakes emerges, Tierney said schools can use the examples, like the election scenario in Slovakia, as teaching moments. In the classroom, teachers could challenge students to consider their own strategies to avoid being manipulated by deepfakes — and then exchange their ideas with each other.
Schools shouldn’t fixate so much on educating every teacher about AI, Tierney said. Rather, she said, “I think what’s even more innovative is empowering the students to share and exchange ideas with each other, because they’re the ones who are going to be on the forefront of this innovation.”
Tierney’s organization, the Social Institute, developed a process to guide students if they discover a fake explicit image of themselves or a peer.
Under this "SHIELD" approach, upon first seeing the image, students should “Stop” and "Huddle" with a trusted adult. Then, they should “Inform” the social media platforms where it was posted, collect “Evidence” by taking a screenshot, for instance, and “Limit” access to the content by blocking the users or accounts responsible for spreading the image. If the fake image involves the student themself, they should “Direct” their peers how they can help spread the word that the image AI-generated.
Core media literacy skills are also key to identifying fake images and content created by AI, McNeill said. Students should ask themselves who created the content and why. Asking students to think critically about the source of information can be a short, daily exercise that teachers facilitate, McNeill said.
“That’s this theory where you expose people to a little bit of misinformation and let them know that this exists, and you have a chance to investigate it, use your detective skills, which is what’s really fun about media literacy for young people,” she added.
Addressing media literacy on a wider scale
As of now, only four states require media literacy education in K-12 schools: California, Delaware, New Jersey and Texas.
However, some school leaders are addressing the issue on their own.
Baldwin Union Free School District in New York, for example, launched a media literacy curriculum covering all social studies and English language arts classes for middle and high school students, according to Superintendent Shari Camhi.
Developed in partnership with Stony Brook University three years ago, the lessons aim to help students identify whether the information they see and hear is truthful, Camhi said. Some of the examples discussed in the news literacy classes involve AI-generated content, she noted.
“This is an example of a lifelong skill that’s imperative, and whether it’s AI or not AI, it’s just being able to identify whether the information that you are receiving is truthful,” Camhi said. Also through this curriculum, high school students can earn college credit in media literacy from Stony Brook, she added.
School leaders should actively seek to address news literacy and AI misinformation with students, Camhi said. While national policy on these issues is still taking shape, it’s important that education leaders have a seat at the table during those discussions, she said.
“If you graduate high school and you’re not the greatest math student — that’s not a good thing, but you can get through life maybe if you don’t know your trigonometric functions,” Camhi said. “But if you don’t know the difference between fact and fiction, that’s a problem. Just at a very fundamental level for our country, that’s a problem.”