New guidance from the U.S. Department of Education’s Office of Educational Technology outlines a “shared responsibility” mentality for ed tech providers to build trust with district leaders as they integrate artificial intelligence into their products and platforms.
This means ed tech providers need to actively manage risks from the rapidly evolving technology, the department said.
In the guidance issued July 8, the Education Department outlined nine “categories of risk” for using AI in schools:
- A “race to release.”
- Bias and fairness.
- Data privacy and security.
- Harmful content.
- Ineffective systems.
- Malicious uses.
- Misinformation management.
- Transparency and explainability.
- Underprepared users.
Reflecting current distrust of AI, the department quoted Patrick Gittisriboongul, assistant superintendent of Lynwood Unified School District in California:
“Would I buy a generative AI product? Yes! But there’s none I am ready to adopt today because of unresolved issues of equity of access, data privacy, bias in the models, security, safety, and a lack of a clear research base and evidence of efficacy.”
A call for ed tech providers to share responsibility with schools in introducing AI into the classroom came from President Joe Biden in a 2023 executive order. Harnessing AI while mitigating its risks requires “a society-wide effort that includes government, the private sector, academia and civil society,” according to the executive order.
This month's guidance for ed tech providers also follows a 2023 Education Department report that stressed the need to maintain a “humans in the loop” approach when using AI in schools.
Still, the department said this month that “asking an educator to review every use of AI or every AI-based output is neither practical nor fair.” That makes it important for ed tech providers to share responsibility for reviewing AI’s uses and outputs, the department added.
The agency outlined five key areas for ed tech providers to consider in developing this shared responsibility with schools:
- Designing for education. Developers should work to understand the specific values and challenges of educational settings, and educator and student feedback should be incorporated into all aspects of product development.
- Providing evidence of rationale and impact. Educational institutions need evidence that an ed tech tool delivers on its advertised solutions.
- Advancing equity and protecting civil rights. Ed tech providers should be aware of representation and bias in data sets, algorithmic discrimination, and how to ensure accessibility for students with disabilities.
- Ensuring safety and security. Ed tech providers need to lay out how they will protect the safety and security of users of AI tools.
- Promoting transparency and earning trust. To build trust with district leaders, ed tech providers need to collaborate with educators and other stakeholders.
The department’s guidance noted that states and school districts are also developing their own AI use guidelines. As of June, 15 states had released resources for integrating AI in education. Ed tech providers, the agency added, should review relevant school and state AI guidance as they look to work with school districts.