2023 was the year many schools began dipping their toes into the world of generative artificial intelligence.
The technology has raised important questions and brought new guidance for educators since ChatGPT entered the scene over a year ago. District leaders have wondered about AI's implications for academic integrity and data privacy, while policymakers and industry leaders have offered cautious excitement about the opportunities while scrambling to navigate the concerns.
K-12 Dive spoke with AI education experts about what's in store as education leaders consider whether and how to embrace the technology in 2024 and beyond. Here are five takeaways.
Districts will issue more comprehensive guidance
It’s likely that more school districts will develop comprehensive frameworks regarding AI use, predicts Joshua Wilson, a professor at the University of Delaware’s School of Education.
“I’d like to see districts move in that direction. I think increasingly there will be resources for districts to do that work,” said Wilson, who researches AI use in schools. District-developed guidance on AI use in schools, he added, will help steer such technology policies in a safe, effective and equitable direction.
Alex Kotran, CEO of The AI Education Project, a nonprofit that promotes AI literacy education, said he is working with states and school districts to develop their own AI education policies.
“The challenge is that there’s like two separate threads. You need immediate policies that deal with things like cheating. Is it cheating to use ChatGPT? Probably is,” Kotran said. Then, he said, schools need to know “what are the tenets of acceptable use? Well, you need to make sure you’re not giving up student data or personally identifiable information. You shouldn’t be putting that into ChatGPT.”
Kotran added that it's crucial that professional development on AI literacy is part of any education policy involving the technology — and that those efforts are scaled out districtwide.
More teachers will lean into AI
Likewise, Wilson said coordinated professional development plans will be needed as districts develop AI guidelines at the teacher and administration levels.
“What we will see in 2024 is that more and more individual teachers will start to use it, and they might find things that they might share with their friends and their colleagues, but that’s not enough,” Wilson said. “The field really needs some structured opportunities for teachers to engage with experts to really understand the technology — its potentials and its limitations — and then understand how to use it effectively, equitably, safely.”
While Wilson said a trend toward AI professional development will likely emerge this year, he doubts it will happen quickly enough. It’s especially crucial to focus on teachers first because they'll be the ones teaching students about AI tools, he added.
Gaining proficiency with AI could save educators time with lesson planning, among other tasks, and allow them to focus more on supporting students instead, Wilson said.
AI use in schools has potential well beyond lesson planning, advocates say.
Instead of focusing on the efficiency aspect of AI, schools should consider more transformative ways to put AI to use in the classroom, said Punya Mishra, who serves on the technology and innovation committee for the American Association of Colleges for Teacher Education. Mishra is also associate dean of scholarship and innovation and an education professor at Arizona State University.
“How can it change fundamentally the way our students learn, or [how] our teachers teach or our schools function?” Mishra asked. “Middle schoolers could write computer simulations using just English. But I see fewer examples of those [innovations], and I see more examples of ‘Oh, this can help you create test questions.’”
Equity issues persist
Districts will often tout their commitment to equity and accessibility when discussing AI, Kotran said.
But “what they haven’t necessarily grappled with, I think, is equity with AI means every student in your district has access to the same model. You can’t have 80% of students using GPT-3.5 and 20% using GPT-4. That’s not equity.”
GPT-3.5 is the model behind the free version of ChatGPT and works only with text. GPT-4, by contrast, requires a monthly subscription and is a more capable model that can handle both text and images.
What equity in AI truly looks like, Kotran said, is districts paying for every student to use the same model. That takes buy-in from the school board, along with a procurement strategy that ensures AI tools adhere to district policies, he said.
While public schools grapple with adopting AI tools and policies, Kotran said, some private schools are already launching AI classes. These signs of inequity are likely to persist moving forward, he added.
A cautionary tale amid the AI ‘gold rush’
Mishra said he fears that school leaders have recently felt pressured to quickly adopt and purchase AI-based ed tech tools amid an AI "gold rush." Given that, he said, district leaders may rush into procuring the technology without vetting ed tech companies and their products closely enough.
“That’s my big worry in the K-12 space, is that everybody who had any kind of an ed tech product is just slapping AI on it in some shape or form, because that’s what they know is going to sell or is going to catch people’s attention,” Mishra said.
He suggests district leaders ask ed tech companies specific questions:
- What does the product do?
- Is the product actually using generative AI?
- How does the tool navigate implicit biases inherent in AI?
In addition, Mishra expressed significant concern about how data privacy is handled when ed tech companies tap into AI.
Both Mishra and Wilson noted that companies will train their AI models using student data, and school leaders need to be aware of that in their contracts.
“Increasingly, institutions are realizing that their data is not just something to be secured and something to be kept safe, but also is a huge commodity,” Wilson said. “My fear is that districts may not realize that.”
AI watermarking is ‘futile’
While watermarking AI-generated content could be an ideal solution for addressing plagiarism fears, Kotran and Mishra agree it’s not likely to be logistically possible anytime soon.
President Joe Biden’s AI executive order, issued in October, called for the U.S. Department of Commerce to establish guidance for content authentication and watermarking to flag when something is AI-generated. Such processes could help address plagiarism concerns because they would allow teachers to better discern a student’s original work from an AI creation.
“It’s only technically feasible currently with images and deepfakes,” Kotran said. “I have not seen any convincing research that suggests there’s a way to watermark text.”
For instance, a simple screenshot cropping out a watermark is just one way to dodge the strategy, Mishra said.
And no matter the technology used for watermarking, Kotran said, AI models are smart enough to find ways to overcome those protections.
“That’s a futile pursuit,” Mishra said. “There’s no future in that. The whole anti-plagiarism software business on that is not working.”
As AI continues to quickly evolve, Mishra said he understands how difficult it can be for policymakers to “go deeper than generic statements while the ground is shifting so fast.”
At the same time, what leaders say now about AI may become irrelevant even six months from now, given the nature of this rapidly changing technology.