Congressional lawmakers appeared torn between embracing artificial intelligence's potential for classroom innovation and worrying about its effect on students' data privacy, biases and critical thinking skills, during a hearing Tuesday on the burgeoning technology's impact on K-12 education.
At the same time, discussion of the Trump administration's sprawling cuts to the U.S. Department of Education overshadowed both the excitement and the concerns about AI that surfaced during the House Subcommittee on Early Childhood, Elementary, and Secondary Education hearing.
While some subcommittee members echoed those fears, panel chairman Rep. Kevin Kiley, R-Calif., also emphasized the positives.
“As understandable and important as these concerns are, the bigger picture is one of opportunity and a truly limitless sense of possibility,” said Kiley. “We suddenly have tools to address many longstanding challenges in new and powerful ways.”
Kiley added that the hearing was not a call for new federal mandates on AI in K-12 schools. Rather, he said, education is a state and local issue, and that is where the best solutions to AI challenges are developed.
Democratic representatives, however, said they were more concerned than excited about how AI will be implemented in schools — especially as the Trump administration works to dismantle the U.S. Department of Education.
The subcommittee "is missing the real crisis: the dismantling of the Department of Education. It’s absurd to envision a bright future for our students when the Office of Education Technology — vital for AI oversight — has just been shut down,” said Rep. Frederica Wilson, D-Fla. “This is like worrying about the ship’s Wi-Fi access while the Titanic is sinking.”
The Office of Educational Technology in recent years has developed national guidance for school leaders, teachers and ed tech leaders on AI use in classrooms. Last year, the Education Department's Office for Civil Rights released a letter outlining ways AI use in schools could violate students’ civil rights, such as by employing the technology for facial recognition or by failing to respond to students who are sexually harassed with explicit deepfake images.
Some panelists and lawmakers in the hearing also stressed that without federal guardrails and guidance for AI use in schools, inequities will worsen as schools implement the technology.
Over half of states have released their own guidelines for using AI in schools, according to TeachAI, a national coalition that aims to guide schools on safe and ethical AI use.
But relying only on states to deploy AI in classrooms without guidance from the federal government “is a recipe for fragmentation” and a “missed opportunity in education,” said panelist Erin Mote, CEO of InnovateEDU and the EdSafe AI Alliance. The technology industry also cannot carry this burden alone, she said.
“Cuts to the U.S. Department of Education, National Science Foundation, Department of Commerce and other federal agencies pose a significant threat to our nation's ability to meet these demands, including vital education technology support directly to states and districts,” Mote said.
Some school districts, meanwhile, are succeeding at using AI in innovative ways, panelists told the subcommittee.
For instance, Mississippi’s Pearl Public School District has its own internal AI enterprise system to safeguard student data, said district Superintendent Chris Chism. This lets teachers and staff safely use AI tools for grading assistance and developing Section 504 plans for students with disabilities, he said.
“AI can make all of us so much more efficient — and that is teachers, that’s students, that's administrators, that’s central office personnel,” Chism said.
Many districts, however, cannot afford to purchase pricey AI systems, Mote said.
Several lawmakers also stressed AI's potential to reinforce the implicit and explicit biases of the humans who train AI systems. Specifically, Rep. Summer Lee, D-Pa., asked Mote how schools will be equipped to protect students' civil rights when using AI — given the major cuts made to OCR.
While every AI tool has algorithmic biases, Mote said, there are ways to mitigate that effect. For example, training data can be “reweighted” after flagging certain information as sensitive, particularly data about students with disabilities or those from other subgroups.
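As a rough illustration of the reweighting idea Mote describes (this is a generic fairness technique, not her specific method): records from smaller subgroups can be assigned larger sample weights so a model trained on the data does not underrepresent them. The subgroup labels below are hypothetical.

```python
# Minimal sketch of data "reweighting": give each record a weight
# inversely proportional to its subgroup's frequency, so that every
# subgroup contributes equally in aggregate during model training.
from collections import Counter

def inverse_frequency_weights(groups):
    """Return one weight per record; rarer subgroups get larger weights."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # Each subgroup's weights sum to n / k, equalizing group influence.
    return [n / (k * counts[g]) for g in groups]

# Hypothetical data: three records in subgroup "A", one in subgroup "B".
weights = inverse_frequency_weights(["A", "A", "A", "B"])
# Subgroup A records each get 2/3; the lone B record gets 2.0,
# so both groups carry equal total weight (2.0 each).
```

Libraries such as scikit-learn accept weights like these through a `sample_weight` argument at fit time; more sophisticated approaches adjust weights jointly over subgroup and outcome labels.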
“But in order to do that, to have the data, to be able to train those models, to be more equitable, … we need data and data infrastructure,” Mote said. “And right now, we are seeing a dismantling of our data infrastructure at the federal level, the very data sets that would allow industry, that would allow researchers, that would allow others to use that data to be able to train these schools to mitigate bias.”