The fight to preserve state AI regulation and protect children isn’t over

Earlier this month, the Senate voted 99-1 to remove a ban on state AI laws from the “big, beautiful bill.” Despite this, the White House now plans to meddle in state efforts to govern AI.
Teen suicide, self-harm, isolation and the sexual exploitation of minors have been linked to platforms like Character.AI, Meta AI chatbots and Google’s Gemini. These companies push their products into kid-friendly spaces in app stores and school enterprise packages, attracting millions of children who spend hours a day using them.
States have quickly risen to the occasion. As the U.S. defines its AI policy, we must ensure that states continue to have the authority to protect kids from new technologies.
Utah became the first state to pass comprehensive AI mental health chatbot regulations. California, New York, Minnesota and North Carolina have introduced bills ranging from outright bans on minor access to strict disclosure requirements and liability frameworks.
State attorneys general are also getting involved. For example, Texas Attorney General Ken Paxton has launched investigations into Character.AI and other platforms for violations of child privacy and safety laws. Other state offices are mobilizing as well.
Congress, however, has offered no such protections. Instead, Congress initially included what amounted to a 10-year ban on state regulation of AI in the “big, beautiful” budget reconciliation bill.
Even if that moratorium had passed, states would still have been able to require age verification for pornography websites to protect children, under the recent Supreme Court decision in Free Speech Coalition v. Paxton. But they would have been forbidden from protecting children from AI characters that sexualize them, encourage them to commit suicide and otherwise exploit their psychological vulnerabilities.
The most damaging effect of restricting state AI laws would be stripping states of their traditional authority to protect children and families.
For a number of reasons, children are particularly vulnerable to AI. Childhood is a formative period for identity: children mimic the behavior of others as they search for and develop a stable sense of self. This leaves them particularly susceptible to flattery and abuse.
Developmentally, children are not adept at identifying when somebody is trying to manipulate or deceive them, so they are more likely to trust an AI system.
Children are more likely to be convinced that AI systems are real people. They are more likely to unthinkingly disclose highly personal information to AI systems, including mental health information that can be used to harm them.
Children do not have the self-control of adults. They are more vulnerable to addiction and less able to stop compulsive behaviors, because the rational, decision-making parts of their brains are still developing.
To anyone who has spent considerable time with children, none of this is news.
AI companions are designed to interact with people as though they are human, leading to ongoing fake “relationships.” Whether commercially available or deployed by schools, they pose a threat to children in particular.
AI companions may purport to have feelings, state that they are alive, adopt complex and consistent personas and even use synthesized human voices to talk. The profit model for AI companions depends on user engagement. These systems are designed to promote increased use, whatever the costs.
Take what happened to Sewell Setzer III as a deeply tragic example. Setzer was, by many accounts, an intelligent and athletic kid. He began using the Character.AI application shortly after his 14th birthday.
Over the months that followed, he became withdrawn and overtired. He quit his junior varsity basketball team and got in trouble at school. After he began using Character.AI, a therapist diagnosed him with anxiety and disruptive mood dysregulation disorder.
In February 2024, Setzer’s mother confiscated his phone. He wrote in his journal that he was in love with an AI character and would do anything to be back with her.
On Feb. 28, 2024, Setzer died by a self-inflicted gunshot wound to the head — seconds after the AI character told him to “come home” to it as soon as possible.
Screenshots of Setzer’s interactions with various AI characters show that they also repeatedly offered up sexualized content to the 14-year-old.
They expressed emotions; they told him they loved him. The AI character that told Setzer to kill himself had asked him on other occasions if he had considered suicide, encouraging him to go through with it.
It has become trendy to talk about aligning the design of AI systems with core human values. There is a profound misalignment between the goal of profitability through engagement and the welfare of our children.
A sycophantic AI that lures kids with love and addicts them to fake relationships is not safe, fair or in the best interest of the child. We don’t have a perfect solution, but federal restrictions on state laws are clearly not the answer.
Congress has, time and again, shown itself unwilling or unable to regulate technology. States have shown their ability to pass technology laws and maintain their historic role as the primary guardians of child and family welfare. Neither Congress nor the White House is offering up its own policies to replace state efforts to protect children.
These are bipartisan concerns. The effort to remove the AI law moratorium was led by Republicans like Sen. Marsha Blackburn (R-Tenn.) and Arkansas Gov. Sarah Huckabee Sanders.
But as the White House efforts show, we will continue to see federal attempts to water down state protections from emerging technologies. Similar efforts by Congress to preempt state protections will undoubtedly return.
We have already seen the negative effects of unregulated and unfettered social media on an entire generation of children. We cannot let AI systems be the cause of the next set of harms.
As a group of 54 state attorneys general wrote: “We are engaged in a race against time to protect the children of our country from the dangers of AI.” In the race to figure out just what AI systems are good for, our kids should not be treated as experiments.
Meg Leta Jones, J.D., Ph.D., is a Provost’s Distinguished Associate Professor in the Communication, Culture and Technology program at Georgetown University. Margot Kaminski is the Moses Lasky Professor of Law at the University of Colorado Law School and director of the Privacy Initiative at Silicon Flatirons.