California Democrats push stronger 'guardrails' for kids using AI chatbots

Grant Stringer, The Mercury News


California lawmakers are again trying to regulate AI chatbots used for socializing by an estimated one-third of teens, amid lawsuits and growing concerns that the technology can push vulnerable kids to self-harm.

Advocates say stiffer regulation will protect youth and curb a rash of suicides tied to AI-generated chats. Platforms have their own rules for minors seeking out AI companions, which can be personalized to simulate conversations with anyone from Napoleon Bonaparte to a Teenage Mutant Ninja Turtle or your own father. But lawmakers say those limits fall short of protecting young users.

“Children using AI companion chatbots today have no guarantee that the platform they’re talking to won’t push them toward self-harm, manipulate their emotions or exploit their data,” said California Assembly member Rebecca Bauer-Kahan of San Ramon in a statement.

A study released last year by Common Sense Media, a San Francisco-based group advocating for limits on youth use of the bots, estimated one-third of teens used AI chatbots as friends, confidants or romantic interests. While the vast majority of interactions with chatbots aren’t dangerous, a spate of disturbing interactions has allegedly led to teens’ deaths.

Legislation introduced by Bauer-Kahan would give parents more power over interactions with AI chatbots by requiring that companies allow parents to set time limits and control whether a bot remembers conversations with children.

Gov. Gavin Newsom vetoed an earlier version of the legislation in October, saying its requirement that developers prove the bots did not encourage self-harm, suicidal ideation and violence would “unintentionally lead to a total ban on the use of these products by minors.” The governor has at times been a friend to tech companies, which form a key part of California’s tax base. A tech trade group opposed that bill.

Newsom did sign off on regulations requiring chatbots to remind users that they are not human, encourage children to take breaks every few hours and send crisis resources if a chatbot's filter detects that the user is suicidal.

In trying to overcome the governor’s veto, the new legislation dropped the requirement that chatbot companies prove they can prevent all harm to young users. Oakland Assembly member Buffy Wicks has also signed onto the bill.

The debate comes as the chatbot industry expands rapidly. Nearly 340 companion chatbot products have been released since 2022, according to the tech news site TechCrunch.

In August, the parents of a Southern California teen sued OpenAI in San Francisco Superior Court, alleging ChatGPT encouraged him to carry out his suicide. Other California parents have since filed similar lawsuits against the AI giant, which has denied the allegations.

The company Character.ai offers a slew of personalized chatbots that mimic people from history, fiction or your own community — including a barista who is always happy to see you. In October, Character.ai announced it was banning minors after a Florida mother claimed her teenage son was abused and encouraged to commit suicide by its characters. However, the parent resource site Internet Matters says young users can easily skirt the company’s restrictions by claiming to be older when signing up.


“We think that AI companion chatbots are not safe for kids under the age of 18. Period,” said Jim Steyer, CEO of Common Sense Media and brother of billionaire gubernatorial candidate Tom Steyer. “Because we’ve seen really significant consequences, at times lethal consequences, of an AI arms race that is moving very quickly.”

Steyer and other supporters of the new bill expect lawmakers to amend it, along with its counterpart in the state Senate carried by Southern California Democrat Steve Padilla, before the legislative session ends in August.

Shomit Ghose, a Silicon Valley venture capitalist and adjunct professor at the University of San Francisco who writes about AI, said Bauer-Kahan’s new bill has “real guardrails” on youth chatbot use.

But, he said, even this stronger proposal would fall short of mitigating “the deeper danger” posed by chatbots.

“Our latest runaway technology, agentic AI, doesn’t just respond; it pursues goals, adapts to its user, and builds persistent behavioral models,” Ghose wrote in an email. “A chatbot that remembers your teenager’s insecurities and emotional patterns across months of conversation isn’t well-described by any content filter.”

Other academics say California Democrats would do well to leave the topic alone. Lawmakers could cause more harm than good by limiting chatbots that some kids already rely upon, said Eric Goldman, a Santa Clara University law professor and the director of its tech law institute.

“Sacramento hates the industry that pays its bills,” he said.

It remains to be seen if the scaled-back bill will win over Newsom and tech industry advocates. A tech trade group that opposed Bauer-Kahan’s bill last year, the Computer and Communications Industry Association, did not respond to an email seeking comment. Newsom’s spokesperson declined to comment.

_____


©2026 MediaNews Group, Inc. Visit mercurynews.com. Distributed by Tribune Content Agency, LLC.
