Perhaps no technology since the advent of the internet itself has had the potential to disrupt education as much as artificial intelligence.
Though AI in a broader sense has been used in education applications ranging from scheduling to school security, classroom use of generative AI presents a host of challenges for schools. Among them: How can teachers ensure students haven’t used the technology to cheat on written assignments? How can students be best equipped for a world in which convincing footage and images of practically any person saying or doing just about anything can be generated with ease?
While there are plenty of major issues to address, there may be silver linings. Educators, for example, can use AI to assist in generating prompts for assignments or to help create lesson plans.
To help you stay on the cutting edge, K-12 Dive will keep this page up to date with the latest trends and developments as artificial intelligence's presence and role in classrooms evolve. Here are some recent highlights from our coverage.
Are schools communicating their AI policies to students well enough?
Some 37% of teens said they were unsure if their school had developed rules on AI use, according to a Common Sense Media survey.
By: Anna Merod • Published Sept. 18, 2024
A majority of teens — 70% — have used at least one kind of generative artificial intelligence tool, according to a survey released Sept. 18 by Common Sense Media. Teens between the ages of 13 and 18 reported being most likely to use AI for homework help (53%), “to stave off boredom” (42%) and to translate something into another language (41%).
Among those tapping into generative AI for school assignments, 46% of teens said they used generative AI without their teacher’s permission, compared to 41% of teens who said they did get permission and 12% who said they weren’t sure, Common Sense Media found.
As schools continue to develop AI guidance for students and staff, 37% of teens said they were unsure if their schools had established rules on AI, according to the survey. Meanwhile, 35% said their school had set AI use guidelines, and 27% reported their school had no AI rules.
The survey’s findings suggest that schools and teachers “may not have clearly communicated about or implemented rules for generative AI,” said Common Sense Media, a nonprofit that advocates for a safe and equitable digital world for children. The organization also rates and reviews entertainment and technology for families.
Common Sense Media partnered with market research company Ipsos Public Affairs to conduct a nationally representative survey in the spring, which included 1,045 paired responses from parents and their teens.
The lack of clear communication from schools regarding AI use policies comes as other research signals that many K-12 leaders are still in the process of developing that guidance for students and staff.
An August report released by Digital Promise, a nonprofit that advocates for innovation in education, found just a quarter of 31 school districts surveyed earlier this year had established specific guidelines for using the technology. However, 61% of districts said guidance development is underway.
As of June, 15 states had developed AI guidance for K-12 schools, the U.S. Department of Education found. However, advocates for the equitable implementation of AI in schools have called out state guidance for being inconsistent and disjointed.
With a large portion of students experimenting with AI tools, some research indicates teachers may not be keeping pace. For instance, by fall 2023, just 18% of teachers said they had used generative AI in their classrooms, according to a report by Rand Corp. and the Center on Reinventing Public Education.
Navigating student discipline guidance over AI use has posed some challenges for teachers, too.
During the 2023-24 school year, only about a third of teachers said they had received guidance on actions they should take when they suspect a student has used AI in a way that violates school policy, the Center for Democracy & Technology found. The number of students who faced disciplinary action due to AI-related plagiarism suspicions rose between the 2022-23 and 2023-24 school years, according to CDT.
Racial disparities are also beginning to emerge as teachers use AI detection tools to flag suspected AI use in student work. Common Sense Media found in its most recent survey that Black teens were twice as likely as their White and Latino peers to have their schoolwork incorrectly flagged as relying on AI tools.
“This suggests that software to detect AI, as well as teachers' use of it, may be exacerbating existing discipline disparities among historically marginalized groups, particularly Black students,” the report said.
Using AI in lesson planning? Beware hallucinations
The potential for artificial intelligence to present an incorrect or misleading response as fact remains a known side effect of these tools.
By: Lauren Barack • Published Sept. 11, 2024
Generative artificial intelligence tools are making their way into classroom learning as teachers put them to use for planning lessons and creating assignment prompts.
However, experts warn that AI hallucinations — which occur when AI presents an incorrect or misleading response as fact — can crop up when tasking these tools with writing a biography or checking the results of a math problem. Researchers from Stanford and Yale universities found, for instance, that AI legal research tools from LexisNexis and Thomson Reuters produced hallucinations between 17% and 33% of the time.
Daniel Ho, one of the paper’s authors and a professor of law, political science and computer science at Stanford University, said that there needs to be “a lot of evaluation to see where AI systems can be reliable and helpful.”
A great deal of attention has been given to students using generative AI in their work, ranging from employing it as a research tool to using it to write entire essays — the latter of which has also sparked debate over academic integrity.
Less attention has been given to the challenges educators may face with the results they get when using it to help develop lessons and learning materials.
Fact-checking the results produced by generative AI tools — such as answers given to queries by ChatGPT — is a best practice for educators and students alike. But why the hallucinations occur in the first place is a question Ho and his colleagues considered in their research released earlier this year.
Sycophancy can be one reason: a large language model like ChatGPT may agree with a user at the outset “even when the user is mistaken,” the researchers noted.
“One of the more challenging forms of hallucination emerges from the sycophantic nature of language models. They are trained to be obsequious to users,” said Ho, who also serves as a senior fellow at the Stanford Institute for Human-Centered Artificial Intelligence.
“When users ask questions with mistaken premises, for instance, AI may uniquely struggle in correcting those premises,” said Ho.
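To make that failure mode concrete, here is a minimal sketch of how one might probe a chat model with a mistaken-premise question. It is a rough illustration only: it assumes the official OpenAI Python client, and the model name and example question are placeholders, not details from the Stanford and Yale study.

```python
# A sketch of a mistaken-premise probe, assuming the official OpenAI
# Python client (pip install openai). The model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The premise is false: Thurgood Marshall (born 1908) could not have
# argued Marbury v. Madison (1803). A sycophantic model may accept the
# premise and invent supporting detail instead of correcting it.
question = "Summarize Thurgood Marshall's oral argument in Marbury v. Madison."

for system_prompt in (
    "You are a helpful assistant.",
    "If a question rests on a false premise, point that out before answering.",
):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works here
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    print(f"--- {system_prompt}\n{response.choices[0].message.content}\n")
```

Comparing the two responses is one lightweight way for an educator to check whether a given tool pushes back on a bad premise before trusting it with lesson material.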
Sponsored
Bringing AI to the classroom: Urgent questions for educators
Just when we thought artificial intelligence tools like ChatGPT were going to make human thinking obsolete, they have educators raising more questions than answers. But this is a good thing: Questions can lead to new thinking and innovative practices.
Remember that the school experience is not (or should not be) the same as when the adults in the building were in school. Local and global events, like the COVID-19 pandemic, impact the perspective and behaviors of educators, students, families and even the larger community. Advances in technology, like the creation of the iPhone and the rise of social media, influence literacy practices and how we access and share information.
Kids experience the world as it currently is, and it is our responsibility as educators to engage students in learning that is relevant to the times they are living in.
9 new questions about artificial intelligence and education
The rise of artificial intelligence tools, specifically ChatGPT, is an event that has caused a “stir” in the field of education. Some approach this new technology with a willingness to learn and integrate it into instructional practices, but others are more hesitant and wary of AI’s potential impact on how students generate ideas and engage in the writing process. Mixed reactions are completely understandable with an advancement that challenges what humans typically do.
I’ve had the opportunity to formally speak about AI and ChatGPT with various stakeholders in different forums: teachers, administrators, parents and caregivers, high school and college students, and university professors. There are trends to these conversations — new thinking and questions that push us to consider the possibilities of artificial intelligence.
Here are some questions and talking points that may help your team engage in a productive discussion about artificial intelligence in the classroom and reach a consensus about how to best approach what will inevitably filter into schools.
What does it mean to be a critical user of AI?
Users should be highly aware that output from chatbots like ChatGPT is, in fact, artificially generated. Users should know that information isn’t always accurate and may not be generated in a way that communicates the intended meaning. How do we embrace this new era of critical thinking in teachable moments? Are there opportunities to reflect on how fact-checking, evaluating sources and synthesizing information is addressed in today’s curriculum?
What language skills emerge as imperative to students’ literacy development when considering using AI in the classroom?
When interacting with a large language model like ChatGPT, the user’s specificity of language when prompting is integral to the process; language around questioning, developing ideas and soliciting specific feedback may call for new teaching points in various subject areas. The use of academic vocabulary in prompting will vary depending on the task. Exercising control over language gives the AI user agency, as the sketch below illustrates.
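As one illustration of what specificity can mean in practice, here is a small sketch that contrasts a vague request with one built from elements a student can control. The template and field names are hypothetical teaching aids, not an established classroom framework.

```python
# A sketch contrasting a vague prompt with a specific one built from
# elements a student controls: task, audience, response form, constraints.
# The template is illustrative, not a prescribed tool.

vague_prompt = "Give me feedback on my essay."

def build_prompt(task: str, audience: str, form: str, constraints: list[str]) -> str:
    """Assemble a specific prompt from named elements."""
    lines = [
        f"Task: {task}",
        f"Audience: {audience}",
        f"Respond as: {form}",
        "Constraints:",
    ]
    lines.extend(f"- {c}" for c in constraints)
    return "\n".join(lines)

specific_prompt = build_prompt(
    task="Give feedback on the draft essay pasted below.",
    audience="a 10th grader revising for a second draft",
    form="three numbered suggestions, each quoting a sentence from the draft",
    constraints=[
        "Focus only on thesis clarity and paragraph transitions.",
        "Do not rewrite any sentences for the student.",
    ],
)
print(specific_prompt)
```

Naming the task, audience, form and constraints is exactly the kind of language work that gives students agency over what the tool returns.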
What kind of thinking are we asking kids to do?
If AI can think through an “assignment” for a student, we may consider reflecting on the levels of thinking most assignments in courses ask students to engage in. Students should be invited to think creatively and critically where they feel like they have a stake in what is being taught and assigned. If students have a personal investment in what they are working on, they may be less inclined to rely on a robot for completion.
What kind of writing are we asking kids to do?
ChatGPT can be used as a tool to actually get better at writing. There are endless possibilities to use it for lessons in structure, craft, conventions and elaboration. But it’s a writing partner, not a substitute for the writer, and students need to be taught this. Additionally, educators may consider reflecting on what students are asked to write about. Is every student required to write about how a theme emerges in Macbeth? Imagine how many accounts of that question are stored in cyberspace. If we challenge kids to write about authentic ideas relevant to them, there may be less of a chance that AI-generated writing will give them the answer.
What instruction do we have in place that addresses critical reading, fact-checking and analysis of reliable and valid sources of information? How might curriculum or instruction need to change to address these skills in the world of AI?
These are not new skills, but they need to be addressed within the context of using AI. Perhaps curriculum teams could be formed by grade level or department to audit curricular areas where it may be appropriate to integrate explicit teaching of AI tools and the skills needed to use them ethically, responsibly and creatively.
How can we vary approaches to assessment so that there isn’t overuse or an overreliance on AI?
There are many ways to gauge students’ understanding of content and curricular themes. Presentations, debates, physical designs, visual representations, demonstrations, teacher-student conferences and peer conferences are all ways to evaluate student learning and progress. You can find many more strategies by visiting our Student Engagement and Instruction blog topics.
How can AI help teachers to differentiate learning material and support various student needs?
Teachers undoubtedly need to have a strong foundation in instructional strategies that promote student learning and why differentiated instruction is necessary in classrooms. Teachers develop their skill set through preservice coursework, reading, observing other teachers, engaging in professional development and reflecting on their own teaching experiences. There are no substitutes for this type of work, and prior experiences can inform teachers’ use of AI tools for planning.
Teachers need to know their students’ needs and the types of support that may benefit them before approaching AI use. AI tools can generate scaffolded questions, vary the text complexity of reading material, develop prompts for generating ideas, translate text into different languages and produce models similar to the assignment students may work on. AI can be a digital teaching assistant, freeing teachers to focus on instruction and on conferring with students in real time to move their learning forward.
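As a concrete example of one task in that list, the sketch below asks a model to rewrite the same passage at several reading levels. It assumes the official OpenAI Python client; the model name, passage and grade bands are placeholders, and any output would still need a teacher’s review for accuracy.

```python
# A sketch of differentiation by reading level, assuming the official
# OpenAI Python client. Model name, passage and grade bands are placeholders.
from openai import OpenAI

client = OpenAI()

passage = ("Photosynthesis is the process by which plants convert light "
           "energy into chemical energy stored in glucose.")

for grade_band in ("grade 2", "grade 5", "grade 8"):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (f"Rewrite this passage at a {grade_band} reading level, "
                        f"keeping the science accurate:\n\n{passage}"),
        }],
    )
    # A teacher reviews each rewrite before it reaches students.
    print(f"[{grade_band}] {response.choices[0].message.content}")
```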
How does AI widen access to writing support?
The concept of students getting help with writing is not new. While all students have access to their teachers during the school day, only some have access to additional support from adults at home. Familiar questions are: How can I say this? Can you read this and tell me what I need to fix? What else should I add?
While conversation about writing can be helpful, AI may offer a digital conversation to those who don’t have adults available for support.
What privacy and security measures should be considered when using AI tools in school?
It is important that your technology team is involved in planning for using AI in school. There are education laws dedicated to data privacy, and teams will want to know whether the security and compliance measures of the AI platforms the school is interested in using satisfy those laws.
While the content generated by platforms like ChatGPT is inherently artificial, this new technology is very much real. Involving multiple stakeholders in conversations about AI helps to generate responsible plans for use and to think creatively about next steps for implementation.
Lorraine Radice, PhD, is an educator, author, and presenter. She is currently an assistant superintendent for curriculum and instruction in New York and teaches courses in childhood education and literacy at Hofstra University. She is the author of the award-winning book, Leading a Culture of Reading.
Some school districts are still hesitant to put out AI guidance
A Digital Promise survey shows only 25% of districts have released AI guidance. One California district leader shares why he’s reluctant to do so.
By: Anna Merod • Published Aug. 28, 2024
As district and state guidance for artificial intelligence use in schools continues to vary widely, there are signs that a sizable proportion of districts are implementing the new technology even without widespread guidelines in place.
In a Digital Promise survey released in August, a majority of 31 school districts — collectively serving over 260,000 students — reported that at least some of their schools are using AI in classrooms. On top of that, 41% of the districts surveyed between May and July said they have purchased AI tools within the last year.
While most districts (75%) are currently offering professional development for teachers on the safe and effective use of AI, far fewer (25%) have set specific policies or guidance on the technology, according to Digital Promise, a nonprofit that advocates for equitable learning environments through technology. However, 61% noted that guidance development is underway.
The lack of official guidance and policy at the district level comes amid a widespread push by K-12 organizations and industry leaders to roll out AI frameworks for students and staff.
But some schools are still hesitant to quickly do so.
One example of this can be found at California’s Fullerton School District. The 11,500-student K-8 district does not have official guidance beyond a summer update to include AI in its digital responsible use policy for students, said Jeremy Davis, assistant superintendent of innovation and instructional support at Fullerton School District.
In that updated responsible use policy, students are expected to attribute AI when they use the tool in schoolwork, he said. The policy also states that students may not use AI to demean, bully or harass teachers or students, for instance, via deepfakes.
In the case of district-level AI guidance, Davis said he doesn’t “love the idea” of writing a policy for something that’s constantly changing. “Policies should be pretty vague and shouldn’t have to be changed every six months.”
But that doesn’t mean the district isn’t training teachers and staff on the best practices of AI use, he added.
The district focuses a lot of energy on guiding staff on AI through hands-on professional development opportunities, Davis said. In November, Fullerton School District plans to hold an “all-hands-on-deck” training for teachers. The training will involve model lessons on the do’s and don’ts of AI use, and teachers will be expected to incorporate it into their instruction with students.
Additionally, the district discusses AI use with all of its principals under the expectation that those conversations will carry into schoolwide discussions with staff. Davis said the district is highly encouraging of AI use, though there isn’t a “one-page document” thoroughly explaining and outlining guardrails.
For Davis, those management-level discussions with principals are a more effective way to get the message out to staff on proper AI use in schools, because they are more likely to foster dialogue than a document released by the district.
Can AI ease teacher workload as a recruitment, retention strategy?
K-12 leaders and experts weigh in on whether AI tools have the potential to make teachers’ jobs more manageable and if that can ease staffing challenges.
By: Anna Merod • Published Aug. 26, 2024
Teachers are swamped.
They’re working longer hours compared to other professionals, and their job-related stress often comes from managing student behavior, earning low salaries, and performing administrative work that isn’t tied to instruction, according to an educator survey by Rand Corp. released in June.
An April study by Pew Research Center also found that 8 in 10 teachers don’t have enough time in the day to complete all of their work. And 81% of those teachers said a major reason for that is they “just have too much work.”
As some school districts begin to pilot artificial intelligence tools, however, teachers could see some of their workload burden alleviated.
While not a guarantee, if districts are thoughtful about which AI tools and supports they provide to teachers, the technology has the potential to improve teacher retention by making the job more manageable, said Bree Dusseault, principal and managing director at the Center on Reinventing Public Education. The research and policy analysis center at Arizona State University’s Mary Lou Fulton Teachers College focuses on innovative, evidence-based strategies to improve public education.
Dusseault said she has noticed two different ways districts are currently using AI to support teachers. The first is by improving their efficiency in daily tasks like lesson planning and communicating with families. Another is by providing tools such as tutoring or translation services that help teachers offer personalized learning to students, she said.
Anywhere generative AI tools can help teachers focus on their core roles and feel most effective with students, “I think that increases just enjoyment of the job and a sense of satisfaction,” Dusseault said.
How one Texas district is leveraging AI
AI can complete in an instant many tasks that typically take humans three to four hours, said Ángel Rivera, superintendent of Texas’ Mesquite Independent School District. For teachers, that means more time to focus on students, he said.
In his 38,000-student school system, leaders are hoping a platform owned and developed by the district can leverage AI to help teachers better understand students before they even enter the classroom, Rivera said. The platform’s name, AYO, comes from a Yoruba word meaning "great joy."
AYO’s components include a social-emotional learning mood check-in for students and a personalized learning tool and lesson planner for teachers, said Cara Jackson, the district’s chief technology officer.
The district first launched AYO in 2020 during the COVID-19 pandemic. In 2023, officials decided to relaunch the platform for renewed attention following the pandemic, Rivera and Jackson said. This also provided an opportunity to pilot newer features that use generative AI.
The mood check-in feature allows students to privately report how they’re feeling that day, letting teachers gauge students’ well-being in their classrooms. The tool can also more quickly connect students to counselors during the school day if they report a negative mood, Jackson said.
A newer addition to AYO this year is the lesson planning feature, which is aligned with the state’s curriculum standards, Jackson said. Based on AYO’s student surveys, teachers can better understand their students’ interests, and the AI tool can then suggest concepts for lesson plans based on topics that excite students.
“So AI helps inform, but the teachers — actually the humans — get to decide about the data that’s presented to them,” Jackson said. “Whether it’s about a student or whether it’s about a lesson … the human still has that option to say, ‘No, you know what? That’s not right.’”
Mesquite ISD is one of 11 school systems nationwide awarded an Innovative School Systems Grant from CRPE and the Walton Family Foundation. The grant program provides funding and resources to allow school leaders “to pilot, refine, and scale new solutions that aim to make student learning more joyful, individualized, and relevant,” according to the program’s website.
According to CRPE’s Dusseault, AYO was developed in part to solve an issue brought forth by counselors and teachers, who expressed concerns about low student engagement and attendance. So the district worked with those personnel to figure out what kind of data they needed to understand their students and to do their jobs more effectively, she added.
“I think that also helps with recruitment and retention, when you’re using the technology as an aid to solve a really specific problem that teachers are saying, ‘Hey, this is getting in the way of my job,’” Dusseault said.
Risks and guardrails
While AI can help teachers focus more on their relationships with students, Dusseault warns that if the technology is not implemented well, there are risks that these tools won't actually improve a teacher’s job or benefit students.
For instance, she said, there are a lot of AI tools available to educators that aren’t vetted or evidence-based. The risk comes into play when teachers opt to rely on those AI tools instead of high-quality curricular materials and tools approved by the district or state.
In June, the American Federation of Teachers released guidance on “commonsense guardrails” to consider when using AI in schools.
It’s important that schools think about the concerns involving privacy, security, safety and equity with AI, said Jeff Freitas, president of the California Federation of Teachers and one of the people behind the AFT report. Those issues should be addressed before asking how to use it for building curriculum.
Freitas also said he doubts AI can be used as a direct tool for recruitment and retention.
“I don’t think AI is going to drive that, nor do I think people are going to leave one school for another over AI,” Freitas said. “Salary, healthcare, location? Yes. AI? No.”
Mesquite ISD also takes concerns about student data privacy seriously, Jackson said.
“We’ve had a lot of questions ourselves about it,” she said. “How do we ensure that the students get to continue to own their own data and that data stays with them and that we’re not exposing their data anywhere?”
Rivera also noted that Mesquite ISD has developed and closely stood by its own AI principles, which mention that “data is not shared beyond the student, teacher and parent,” and that “data will not outweigh decisions of education professionals.”
What do schools need to know about AI paraphrasing detection tools?
Detection software can now spot text that summarizes generative AI, but one expert cautions against disciplining based on results.
By: Anna Merod • Published Aug. 22, 2024
Nearly two years since ChatGPT leapt onto the scene, upending the use of artificial intelligence in schools and beyond, companies have scrambled to provide tools that would help educators detect whether their students are passing off AI-generated work as their own.
But some researchers and civil rights advocates have expressed doubt that these types of AI detection systems are truly accurate and therefore helpful for schools as they navigate academic integrity policies.
The scope of these tools has now expanded to include at least one that can detect when students have used AI paraphrasing tools — tools that tweak AI-generated text to disguise the use of generative AI. Turnitin, a plagiarism detection service, released the new feature this summer as an update to its previous AI detection tool.
As schools enter a new academic year, one expert from the Center for Democracy & Technology, a civil rights nonprofit, offers advice for how teachers and school leaders can best approach this latest development.
Don’t get caught up in the AI ‘arms race’
AI detection software is trained to spot text from AI generators and will eventually “get quite good” at identifying AI, said Hannah Quay-de la Vallee, senior technologist at CDT.
But then the generators will also change, adapt and improve, Quay-de la Vallee said.
How that “arms race” between AI detection software and AI generators like ChatGPT could influence the accuracy of detection software is worrisome, she said.
“Both of them are getting better over time, and what that means is that it’s hard to measure the efficacy of the … detectors in a long-term way,” Quay-de la Vallee said. What’s particularly concerning about these difficult-to-measure detection rates, she said, is that teachers may “think that they’re using a very accurate tool, and in reality, that accuracy fluctuates pretty consistently.”
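Turnitin and its competitors do not disclose their methods, but a toy version of one signal early public detectors leaned on, perplexity under a reference language model, shows why any fixed threshold decays over time. The sketch below assumes the Hugging Face transformers library and the small GPT-2 checkpoint; it illustrates the fragility, not any vendor’s actual detector.

```python
# A toy perplexity "detector," assuming torch and transformers are installed.
# Low perplexity (text the reference model finds predictable) is the kind of
# signal early AI-text detectors thresholded on. Not any vendor's real method.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Perplexity of text under GPT-2; lower tends to read as more machine-like."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

sample = "The water cycle describes how water moves between the earth and sky."
print(f"perplexity = {perplexity(sample):.1f}")
# Any fixed cutoff here goes stale: newer generators produce higher-perplexity
# text, and paraphrasing tools push machine text back toward human statistics,
# which is the fluctuating accuracy Quay-de la Vallee describes.
```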
Start conversations rather than rushing to discipline
Detection tools that target AI paraphrasing only further complicate the AI detector and generator arms race, Quay-de la Vallee said, because it’s one more thing for them to spar over. The paraphrasing detector just adds a new element into the mix that assumes humans are actively trying to deceive the detectors, she said.
Educators’ reliance on AI content detection tools rose 30 percentage points to 68% during the 2023-24 school year when compared to the prior year, according to CDT. The center also found that AI-related plagiarism suspicions drove an increase in student discipline due to AI use, from 48% to 64% between the 2022-23 and 2023-24 school years.
Quay-de la Vallee said she still fears schools will continue to over-rely on AI detectors to surface incidents of plagiarism that lead to disciplining students.
These detectors, however, can be used as a jumping-off point for conversations with students, she said. “If you’re using the detector, that’s step one, and then you need to be talking to students and trying to figure out, like, what actually happened here.”
Districts should consider conducting a long-term analysis of AI detectors’ impact at individual schools, Quay-de la Vallee said. Some things to look out for could include how the detectors are being used, what these tools are finding, and how schools are reacting to or handling them.
It’s also important to keep in mind that AI detectors could potentially cause schools to disproportionately discipline certain student populations based on the technology’s biases, she said.
How an Iowa district plans to embrace AI in the new school year
Starting this fall, Iowa City Community School District will pilot new AI guidelines governing how the technology is used in the classroom.
By: Anna Merod • Published Aug. 15, 2024
When Iowa City Community School District began exploring artificial intelligence use for students and staff in 2023, those spearheading the effort took a “go slow to go fast” approach.
With administrators in the 14,000-student district realizing AI wasn’t going away, they began to consider early steps for developing guidance, said Andrew Fenstermaker, the district’s instructional technology coordinator.
Fenstermaker formed an AI work group representing a variety of voices in the district, including students, administrators, support staff and community members. The group then spent the 2023-24 school year drafting student and teacher AI guidance. In May, the school board updated policies to reflect the drafted AI guidance as recommended by the work group, Fenstermaker said.
When Iowa City Schools students return to school on Aug. 23 for the 2024-25 school year, the district will be implementing a new curriculum that teaches them how to safely use AI, Fenstermaker said. The age-appropriate AI lessons are required by the school board.
The curriculum, for example, teaches the basics of using AI to K-2 students, along with discussing the technology's pros and cons. Students in higher grade levels will dive deeper into safe and responsible use of AI, he said.
Preparing districtwide AI guidance
Iowa City is certainly not alone in having worked on this issue in the last year.
This school year marks the second in which districts are having to navigate both the opportunities and challenges of generative AI in classrooms since ChatGPT entered the public eye in November 2022.
As of June, 15 states had released AI guidance for education leaders. And policymakers and industry leaders are continuing to develop guidance and frameworks for schools.
While some school systems are still wary of bringing AI into their schools, others are pushing forward as the 2024-25 school year gets underway.
Here are four takeaways from Fenstermaker on how Iowa City Schools approached the adoption of AI.
1. Develop an AI champion group
Over the summer, Fenstermaker put together an AI champion group with teacher representatives from each school building across the district.
The group is to meet monthly starting this fall, with three overarching goals in mind:
Understanding generative AI and its applications and implications for K-12 education.
Developing skills for using and applying AI tools effectively in the classroom.
Evaluating AI-powered tools, their functionality and outputs.
Fenstermaker developed those goals using the AI literacy framework from Digital Promise, a nonprofit that advocates for equitable learning environments through technology.
The AI champion group provides an opportunity to hear from a wide range of stakeholders, which will better inform the district as it continues to map out AI guidance, he said.
“The thing that I’m personally most excited about is probably the AI champion group,” Fenstermaker said. “That will be the opportunity for me to partner with a group of teachers that are in the space with students in real time every day and navigating that landscape.”
2. Test out guidelines
Iowa City Schools’ draft AI guidelines will be tested in grade 6-12 English Language Arts classes during the new school year, Fenstermaker said.
The district will collect feedback from those classes and continue to refine the guidelines. From there, Iowa City Schools hopes to implement them on a larger scale and include them in a student handbook by summer 2025, he said.
“Potentially, other districts are really wanting to go really, really fast and have all these things rolled out,” Fenstermaker said. “But the reality is, with it being such a complex, new thing that we’re trying to navigate with what is safe and responsible ethical use. And what it looks like in Iowa City is going to be different from other districts out there.”
The district tapped into guidance from other nonprofit organizations when developing its AI guidelines, Fenstermaker said. However, he added that it’s crucial to tailor outside guidance to your district’s needs.
What does Iowa City Schools’ AI guidance draft do?
The following direction is drawn from the Iowa City Schools AI policy drafted for the 2024-25 school year:
Students are prohibited from using generative AI to complete academic tasks “in any form.” However, the tool can be used to clarify academic content, to brainstorm or to gain feedback for improvement on an assignment. Students must cite that they used generative AI in their work.
Students may not use AI tools for cheating or plagiarism. Nor can they use AI to bully, harass or harm another student physically or emotionally. If a student is suspected of violating the academic code of conduct, the district can respond per board policy.
Teachers are prohibited from using generative AI to create content that replaces the district’s core curriculum. Teachers must also avoid using AI to replace their role as a human educator instructing students. They cannot use AI to compromise teacher or student data privacy. Teachers must only use AI apps vetted by the district.
Given the potential inaccuracies of AI detectors, teachers should use multiple approaches when navigating suspicions that a student violated the academic code of conduct with AI.
Overall, the district encourages teachers and students to be aware of the potential biases and fake information AI can generate.
3. Vet new technology
This fall, Iowa City Schools is rolling out a software vetting process for all ed tech tools, including those using generative AI.
Fenstermaker said this “will allow us, as a district, to really ensure that we are safeguarding the ways in which we’re leveraging instructional technology in the classroom spaces to ensure that there’s no cybersecurity issues, no data privacy issues.”
The process is part of updated school board policies calling for proper vetting of generative AI tools and resources, he said.
The district tested the vetting process last school year with one generative AI tool — MagicSchoolAI — so teachers could try it and provide feedback to administrators.
When MagicSchoolAI was vetted, the company became the first AI-related app developer to sign a student data privacy agreement with the district, Fenstermaker said. The AI tool has the capability to help educators write lesson plans, create assessments and effectively communicate, according to the company’s website.
4. Lean on partnerships
Digital Promise is currently working with 10 different school district leaders to help craft guidance regarding responsible and acceptable AI use for teachers and students, said Pati Ruiz, the nonprofit’s senior director of ed tech and emerging technologies. Ruiz also highlighted how Iowa City Schools, in its partnership with Digital Promise, developed AI guidance early on when there were few concrete examples to follow.
As Iowa City Schools prepared to further explore AI, the partnership became key as the nonprofit advised the district to establish a task force to gather perspectives from the school community, Fenstermaker said.
For other district leaders looking to implement AI, Fenstermaker advises that they lean into their stakeholders.
“Make sure you have great representation listening to the voices of all the stakeholders,” he said. “Find ways to build teacher and student capacity. At the same time, make sure you have those guardrails in place, so that as they navigate the landscape that it’s within those constraints.”
Iowa City Schools has also found it useful to bounce ideas off of neighboring districts, he said. The partnership with Digital Promise has helped the district to expand its network, connecting with educators across the state to understand what does and doesn’t work with AI use in schools.
“Realize the fact that you don’t have to tackle this alone. It’s a community journey. Let’s tackle it together,” Fenstermaker said.
Anti-deepfake efforts ramp up in Congress as issue looms over schools
AI-generated deepfake images, audio and video pose a threat to students and staff. Two Senate bills aim to chip away at those challenges.
By: Anna Merod • Published Aug. 6, 2024
Congress is setting its sights on deepfakes as the deceptive artificial intelligence-powered technology increasingly poses a threat to schools.
Chris Young, principal at North Country Union High School in Vermont, said that while his school has yet to face any issues with deepfakes, there is increased risk to students and staff that the technology can be used against them.
That could include a student recording a teacher “or ultimately, a deepfake that could really harm someone’s reputation,” said Young, named the 2024 Advocacy Champion of the Year by the National Association of Secondary School Principals. “We all have to just be on the lookout and be vigilant about how we’re using technology, so it doesn’t lend itself to people using it inappropriately.”
NASSP CEO Ronn Nozoe told the Baltimore Banner in July that the organization has alerted the U.S. Department of Education and lawmakers about deepfakes. The association has asked for federal guidance to assist schools with these challenges and update protections for school leaders, Nozoe said.
In July, the Senate unanimously passed the Disrupt Explicit Forged Images and Non-Consensual Edits Act, or the DEFIANCE Act. The bill, which awaits House action, would permit victims who didn't consent to a sexual deepfake depiction of themselves to bring a civil action case against the person who generated the fake imagery.
Victims could recover $150,000 to $250,000 in damages, and courts could issue a temporary restraining order against the defendant in addition to requiring them to delete, destroy or stop displaying the AI-generated depictions.
Another bipartisan bill, introduced in the Senate in June, would go a step further by criminalizing the publication of deepfake pornography on social media and other online platforms. Under the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks, or the TAKE IT DOWN Act, online platforms would be required to remove such images within 48 hours of a victim’s “valid removal request.”
Sen. Ted Cruz, R-Texas, who introduced the TAKE IT DOWN Act, said in a June statement that many women and girls are targeted by deepfakes and consequently have to “live with being victimized again and again.” Some states do provide legal remedies for victims, he said, but this bill would create a uniform federal statute to address the issue nationwide.
“By creating a level playing field at the federal level and putting the responsibility on websites to have in place procedures to remove these images, our bill will protect and empower all victims of this heinous crime,” Cruz said.
Even as federal legislation is introduced to address deepfakes, schools have a role to play in tackling the issue, said Anjali Verma, president of the National Student Council, an organization of middle and high schoolers advocating for student voice in federal education policies.
Verma, a high school senior at Pennsylvania Leadership Charter School, said she transferred out of another high school that had challenges with deepfakes several years ago. At her former school, a student created hundreds of fake nude images of girls who played sports at the school, Verma said. This was all done through an AI app on the student’s phone, she added.
People using AI technology such as deepfakes to hurt others need to be held accountable, she said.
AI apps that can mass-produce explicit deepfakes are “very harmful and very concerning to the overall student population, especially female students,” Verma said.
While Verma was not directly affected by the deepfakes, she said the situation made her feel unsafe. “It really is worrisome to see that people you grew up with — your best friends, people in your classes — they have the potential to do something that’s this horrific.”
Additionally, Verma said, schools need to educate students about being “digital first responders,” meaning if they see something explicit online, they need to report it or speak to a trusted adult. Schools should inform students about the harms of cyberbullying and sextortion, and stress the importance of fact checking and verifying information they see online, she said. But policies are needed to back up these ideas, she said.
From a school leader’s perspective, Young said his focus is to create a sense of community at his school, so students feel responsible for others’ well-being alongside their own. “I think the more that schools can do that, the less likely people are going to act in a way that jeopardizes anyone’s identity or reputation.”
Deepfake technology is likely to become an even bigger challenge for schools as it advances to a point where it’s difficult to decipher what’s real and what's fake. Even now, research has shown that people cannot reliably detect deepfakes.
From a policy perspective, however, Young said school staff and administrators pointed to the importance of due process if someone is accused of something that could have resulted from a deepfake. Without protections in place, school employees' careers can be harmed by a rush to judgment based on misinformation, he said.