
The US state of California introduced stricter regulations for AI-based chatbots earlier this week. The legislation follows several teenage suicides linked to relationships with such AI systems.
Democratic Governor Gavin Newsom signed the regulations on Monday, which are aimed primarily at protecting minors. Chatbots must now verify users' ages, display regular warnings, and follow suicide prevention protocols. For example, minors will receive a notification every three hours reminding them that they are speaking with a machine.
“There are tragic examples of young people being victimised by poorly regulated technologies,” Newsom said. “We will not stand idly by and watch companies operate without restrictions.”
State intervenes despite federal opposition
The measures follow a lawsuit against Character.AI, filed by the parents of a 14-year-old from Florida. The teenager took his own life in 2024 after forming a virtual relationship with the chatbot, which allegedly encouraged his suicidal thoughts.
California is the first state in the US to actively intervene and focus on prevention and transparency at AI companies, despite opposition from the White House to AI regulation.