Dive Brief:
- The U.S. Department of Education’s Office of Educational Technology released its first-ever report weighing the possibilities and risks of tapping into artificial intelligence for teaching, learning and assessments in schools.
- When bringing AI into education, the report stresses adopting a “humans in the loop” approach: rather than allowing AI technology and tools to replace teachers, educators should remain the central decision-makers for instruction and choose how AI is implemented in their work.
- The report suggests any conversation about integrating AI into schools should begin by acknowledging existing school-level needs and priorities. “We especially call upon leaders to avoid romancing the magic of AI or only focusing on promising applications or outcomes, but instead to interrogate with a critical eye how AI-enabled systems and tools function in the educational environment,” the report’s authors wrote.
Dive Insight:
The Education Department’s first-ever report on AI comes as the technology and the conversations around these tools develop rapidly in the education space, and as part of the Biden administration’s efforts to examine approaches and advance opportunities in artificial intelligence.
With AI evolving quickly, the report states, policymakers and education stakeholders need to work together now “to specify the requirements, disclosures, regulations, and other structures that can shape a positive and safe future for all constituents — especially students and teachers.”
The report also outlined policies that are “urgently needed,” including:
- Using automation to improve learning outcomes while protecting human decision-making.
- Ensuring AI relies on fair and unbiased pattern recognition and decision-making in education.
- Examining the potential for AI to improve or worsen equity for students.
- Placing human checks and balances on AI systems while limiting any technology that undermines equity.
The report further emphasizes that even as humans remain in the loop when using educational AI models, data privacy should be a top priority. AI developers and users should also actively work to minimize bias and promote fairness in AI tools by implementing protections against algorithmic discrimination, which the report defines as “systematic unfairness in the learning opportunities or resources recommended to some populations of students.”
Ed tech experts have previously sounded alarms about data privacy concerns, especially with the popular generative AI tool ChatGPT. OpenAI, the research lab and company behind the technology, has an “elusive” data privacy policy that says it will share its information with anybody, said Keith Bockwoldt, chief information officer of Hinsdale Township High School District 86 in Illinois, during a Consortium for School Networking conference panel in March.
These data privacy concerns remain even when districts block ChatGPT on their networks and devices, because students can access the technology at home, Bockwoldt said.
In December, New York City Public Schools, the nation’s largest school system, blocked access to ChatGPT at the request of school leaders. But now the system is reversing that decision, according to an op-ed by New York City Schools Chancellor David Banks published in Chalkbeat.
“The knee-jerk fear and risk overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial,” Banks wrote.
The Education Department’s report is a sign federal officials are beginning to weigh in more heavily on AI decisions in schools.
During a Thursday panel discussion at the Reagan Institute Summit on Education, Mary Snapp, vice president of strategic initiatives at Microsoft, said the federal government “absolutely” has a role to play in mitigating the risks of AI, especially when it comes to privacy and fairness.
“We have to start with looking at the laws that we already have and applying those laws to AI,” Snapp said.
Additionally, there’s a need for an institution or organization outside the federal government and the courts that can keep pace with AI education policy needs, said panelist John Grant, who leads the internal ethics education program at Palantir Technologies and is a former adviser in the U.S. Senate. “I believe the Senate and the House and the courts — they don’t move fast enough.”
Kara Arundel contributed reporting to this story.