Congress is setting its sights on deepfakes as the deceptive artificial intelligence-powered technology increasingly poses a threat to schools.
Deepfake technology uses AI to generate false images, audio or video recordings. Examples affecting schools have ranged from students creating fake nude images of their classmates to a high school staff member allegedly circulating a misleading audio clip of a principal in Maryland.
Chris Young, principal at North Country Union High School in Vermont, said that while his school has yet to face any issues with deepfakes, the technology poses a growing risk that it could be used against students and staff.
That could include a student recording a teacher “or ultimately, a deepfake that could really harm someone’s reputation,” said Young, named the 2024 Advocacy Champion of the Year by the National Association of Secondary School Principals. “We all have to just be on the lookout and be vigilant about how we’re using technology, so it doesn’t lend itself to people using it inappropriately.”
NASSP CEO Ronn Nozoe told the Baltimore Banner in July that the organization has alerted the U.S. Department of Education and lawmakers about deepfakes. The association has asked for federal guidance to assist schools with these challenges and update protections for school leaders, Nozoe said.
In July, the Senate unanimously passed the Disrupt Explicit Forged Images and Non-Consensual Edits Act, or the DEFIANCE Act. The bill, which awaits House action, would permit victims depicted in sexual deepfakes without their consent to bring a civil action against the person who generated the fake imagery.
Victims could recover $150,000 to $250,000 in damages, and courts could issue a temporary restraining order against the defendant and require them to delete, destroy or stop displaying the AI-generated depictions.
Another bipartisan bill, introduced in the Senate in June, would go a step further by criminalizing the publication of deepfake pornography on social media and other online platforms. Under the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, or the TAKE IT DOWN Act, online platforms would be required to remove such images within 48 hours of a victim’s “valid removal request.”
Sen. Ted Cruz, R-Texas, who introduced the TAKE IT DOWN Act, said in a June statement that many women and girls are targeted by deepfakes and consequently have to “live with being victimized again and again.” Some states do provide legal remedies for victims, he said, but this bill would create a uniform federal statute to address the issue nationwide.
“By creating a level playing field at the federal level and putting the responsibility on websites to have in place procedures to remove these images, our bill will protect and empower all victims of this heinous crime,” Cruz said.
Even as federal legislation is introduced to address deepfakes, schools have a role to play in tackling the issue, said Anjali Verma, president of the National Student Council, an organization of middle and high schoolers advocating for student voice in federal education policies.
Verma, a high school senior at Pennsylvania Leadership Charter School, said she transferred out of another high school several years ago after it faced challenges with deepfakes. At her former school, a student used an AI app on their phone to create hundreds of fake nude images of girls who played sports there, Verma said.
People using AI technology such as deepfakes to hurt others need to be held accountable, she said.
AI apps that can mass-produce explicit deepfakes are “very harmful and very concerning to the overall student population, especially female students,” Verma said.
While Verma was not directly affected by the deepfakes, she said the situation made her feel unsafe. “It really is worrisome to see that people you grew up with — your best friends, people in your classes — they have the potential to do something that’s this horrific.”
Additionally, Verma said, schools need to educate students about being “digital first responders”: if they see something explicit online, they should report it or speak to a trusted adult. Schools should also teach students about the harms of cyberbullying and sextortion, and stress the importance of fact-checking and verifying information they see online. But policies are needed to back up these ideas, she said.
From a school leader’s perspective, Young said his focus is to create a sense of community at his school, so students feel responsible for others’ well-being alongside their own. “I think the more that schools can do that, the less likely people are going to act in a way that jeopardizes anyone’s identity or reputation.”
Deepfake technology is likely to become an even bigger challenge for schools as it advances to the point where it’s difficult to discern what’s real and what’s fake. Even now, research has shown that people cannot reliably detect deepfakes.
From a policy perspective, Young said, school staff and administrators have pointed to the importance of due process when someone is accused of conduct that could have stemmed from a deepfake. Without such protections in place, a rush to judgment based on misinformation can damage school employees’ careers, he said.