Dive Brief:
- A new artificial intelligence vetting checklist from the nonprofit Future of Privacy Forum aims to help schools and districts safeguard student data privacy as they develop AI use policies for students and staff.
- The checklist outlines how districts can verify that AI ed tech products comply with local, state and federal laws, much as they would when vetting other ed tech offerings. Schools should also know how the AI tool will be used and ask service providers whether it requires students' personal information and, if so, whether that use complies with existing law.
- If an AI ed tech tool does use student data, schools should be prepared to explain to teachers, students and parents how it does so, the Future of Privacy Forum said. Schools should also determine whether student data will be used to train the tool's large language model.
Dive Insight:
Many of the K-12 AI frameworks released by various organizations, and even by some state education departments, stress that districts must follow privacy laws. But they rarely clarify how to do so.
According to the Future of Privacy Forum, its checklist and related guidance are designed to address that gap.
A key takeaway, the organization wrote, is that schools should have a process for vetting ed tech tools — and if they don’t, the momentum around generative AI should propel schools to develop such policies and procedures.
The main federal privacy laws schools should consider when vetting AI ed tech include the Family Educational Rights and Privacy Act (FERPA), the Protection of Pupil Rights Amendment (PPRA) and the Children's Online Privacy Protection Act (COPPA), according to the Future of Privacy Forum.
PPRA requires schools to allow parents to review instructional materials and restricts the collection of certain sensitive information. FERPA requires schools to allow parents and students to review education records and permits schools to share those records only with consent from parents or eligible students, or when certain safeguards are in place. COPPA applies to for-profit companies and limits their ability to collect personal data from children younger than 13.
COPPA 2.0 — an updated and expanded version of COPPA — is currently being considered by lawmakers in Congress. The proposal would ban targeted advertising to children and teens and expand online protections to apply to students under the age of 17.
The Future of Privacy Forum noted there are over 128 state student privacy laws that schools also need to track, as these laws typically impose additional requirements on sharing student data with ed tech vendors.
Just months after OpenAI released its generative AI tool ChatGPT in late 2022, school district technology leaders flagged concerns over how to protect student data. If students share their personal information with AI products like ChatGPT, someone could potentially search for and access that information through the AI, ed tech experts have previously warned.