As states slowly begin to implement artificial intelligence guidance for local school districts, ILO Group released a framework Monday for how state education agencies can advise schools on the quickly evolving technology.
ILO Group, an education strategy and policy firm, breaks down AI guidance into four areas of consideration for states — political, operational, technical and fiscal.
Under the political category, the consulting firm suggests officials establish a state-level task force focused on AI in education. Within that task force, officials can dig into the potential effects of AI, develop guidelines and policy recommendations, bolster AI literacy and create government oversight structures that uphold accountability when implementing AI in schools.
By working with focus groups and conducting surveys, states can create stakeholder engagement plans to better understand feedback, ideas and concerns about AI in classrooms, ILO Group said. States can also develop their own vision and principles for responsible and ethical AI use alongside a framework that helps schools understand how to purchase and use these tools.
The report suggests developing a state AI roadmap to outline that vision in phases over two to three years.
To enforce these efforts, the report suggests states appoint an AI director to ensure compliance with a state-developed responsible AI framework. An internal governance structure can regularly audit AI tools for potential biases, errors or unintended consequences.
This structure should maintain compliance with federal student data privacy laws, including the Family Educational Rights and Privacy Act and the Children's Online Privacy Protection Act. Overall, states need to develop a strategic communication plan for all of their initiatives, ILO Group suggests.
Other suggestions from ILO Group for carrying out state-level AI guidance for schools include:
- Potentially requiring school districts to disclose whether they are using free versions of AI tools with their students or teachers.
- Dedicating a funding stream to support educational AI initiatives, such as professional development investments or helping districts procure AI tools — with enhanced security and privacy features — at a reduced cost.
- Establishing an AI technical support network to help provide resources, best practices and broader support for districts to better secure their AI data.
So far, at least seven states have released guidance for K-12 leaders to navigate AI usage: California, North Carolina, Ohio, Oregon, Virginia, Washington and West Virginia. More recently, federal guidance on the issue was proposed in the bipartisan NSF AI Education Act of 2024 in the Senate, which would authorize the U.S. National Science Foundation to develop guidance on artificial intelligence in pre-K-12 classrooms.
Meanwhile, teachers’ familiarity with generative AI is on the rise. A survey of 1,003 teachers conducted in May by the Walton Family Foundation found that 79% of teachers are somewhat or very familiar with ChatGPT, compared with 55% in February 2023. Students are also more aware of the technology, with 75% saying they were familiar with ChatGPT in May versus 37% last year.
The flurry of guidance from districts, states and the industry continues to roll out as superintendents and principals grapple with ongoing issues like AI plagiarism and deepfakes — all while feeling pressure to seek opportunities to innovate in the classroom with AI.