Emma Roth is a news writer who covers the streaming wars, consumer tech, crypto, social media, and much more. Previously, she was a writer and editor at MUO.
A new bill proposed in California (SB 243) would require AI companies to periodically remind kids that a chatbot is an AI and not a human. The bill, introduced by California Senator Steve Padilla, is meant to protect children from the “addictive, isolating, and influential aspects” of AI.
In addition to barring companies from using “addictive engagement patterns,” the bill would require AI companies to submit annual reports to the State Department of Health Care Services outlining how many times they detected suicidal ideation in kids using the platform, as well as the number of times a chatbot brought up the topic. It would also make companies warn users that their chatbots might not be appropriate for some kids.
Last year, a parent filed a wrongful death lawsuit against Character.AI, alleging its custom AI chatbots are “unreasonably dangerous” after her teen, who chatted with the bots constantly, died by suicide. Another lawsuit accused the company of sending “harmful material” to teens. Character.AI later announced that it’s working on parental controls and developed a new AI model for teen users that will block “sensitive or suggestive” output.
“Our children are not lab rats for tech companies to experiment on at the cost of their mental health,” Senator Padilla said in a press release. “We need common sense protections for chatbot users to prevent developers from employing strategies that they know to be addictive and predatory.”
As states and the federal government double down on the safety of social media platforms, AI chatbots could soon become lawmakers’ next target.