Dive Brief:
- The White House released a National Policy Framework for Artificial Intelligence on Friday that calls on Congress to protect children on AI platforms, including by mandating that parents be able to oversee their children’s privacy settings, screen time, and exposure to content.
- The Trump administration recommended that Congress create “commercially reasonable, privacy protective, age-assurance requirements” for AI tools that children and teens are likely to access. The framework also suggests that existing federal child privacy protections — such as data collection limits for model training and targeted advertising — should apply to AI systems.
- The framework is intended as a blueprint for congressional action, and the Trump administration said it wants to work with lawmakers in the coming months to turn the policy recommendations into legislation.
Dive Insight:
The release of the White House’s AI policy framework comes as multiple bills aiming to heighten federal online protections for children and teens have advanced through Congress.
The measures' path to enactment has not been all smooth sailing, however. While the House Energy and Commerce Committee earlier this month cleared three youth online safety bills, heated partisan debate rocked the markup session. House Democrats said they were particularly concerned about preemption requirements in the Kids Internet and Digital Safety Act — which advanced in a 28-24 roll call vote — that would limit states’ abilities to protect children and teens online through their own stronger regulations.
The White House framework, however, advises against federal policies prohibiting states from enforcing their own laws that protect children online and on AI platforms. This appears to be in line with Democrats’ stance against such state preemptions.
Another recommendation in the framework asks Congress not to preempt states’ own requirements for using AI, “whether through procurement or services they provide like law enforcement and public education.”
That language applies to a lot of current state regulation on AI in schools, said Amelia Vance, founder and president of the Public Interest Privacy Center, in an email to K-12 Dive on Monday.
“State laws requiring school districts to adopt AI governance policies, procurement safeguards, human oversight of AI-driven decisions about students, AI literacy standards, teacher training: those are all requirements governing how the state delivers public education, and they're likely protected,” Vance said of the framework.
Kris Hagel, chief information officer at Peninsula School District in Washington, said a main concern in his district is that students are using AI platforms for mental health support or to create nude deepfake images.
While it’s encouraging that AI youth protections are being discussed in the White House framework and in Congress, action needs to happen fast, he said — whether that’s at the state or federal level.
“At the end of the day, we need the protections,” said Hagel, who is a member of The Consortium for School Networking’s board of directors. “We can’t just sit and continue to wait for some of this to happen.”
The KIDS Act cleared by the House committee would also require new guidelines for AI chatbots that interact with young users. For instance, AI chatbots would have to disclose they are “not a natural person” in conversations with children and teens, and they would have to provide resources such as a suicide and crisis prevention hotline if a minor asks about suicide or suicidal ideation.
Likewise, the White House framework urges Congress to require that AI companies implement features “that reduce the risks of sexual exploitation and self-harm to minors.”
The Trump administration has supported policies and initiatives that protect youth online and that encourage schools to experiment with and implement AI tools in the classroom. In May 2025, President Donald Trump signed the Take It Down Act, a law cited in the White House framework that criminalizes the use of AI to create deepfake nude images without the depicted person’s consent.
Several months later, in July, the U.S. Department of Education sent a “Dear Colleague” letter to district and state leaders guiding them on how to integrate AI in schools using federal grants that are already available.