
Lawsuits and safety concerns
Character.AI was founded in 2021 by Noam Shazeer and Daniel De Freitas, two former Google engineers, and raised nearly $200 million from investors. Last year, Google agreed to pay about $3 billion to license Character.AI's technology, and Shazeer and De Freitas returned to Google.
But the company is now facing multiple lawsuits claiming its technology has contributed to teen deaths. Last year, the family of 14-year-old Sewell Setzer III sued Character.AI, accusing the company of responsibility for his death; Setzer died by suicide after regularly texting and speaking with one of the platform's chatbots. More suits have followed, including one brought by a Colorado family whose 13-year-old daughter, Juliana Peralta, died by suicide in 2023 after using the platform.
In December, Character.AI announced changes including improved detection of content that violates its policies and revised terms of service, but these measures did not stop underage users from accessing the platform. Other AI chatbot services, such as OpenAI's ChatGPT, have come under similar scrutiny over their effects on young users. In September, OpenAI introduced parental control features aimed at giving parents more insight into how their children use the service.
The cases have caught the attention of government officials, likely prompting Character.AI's announced changes to chat access for users under 18. Steve Padilla, a Democrat in the California State Senate who introduced the state's chatbot safety bill, told The New York Times that “the stories are piling up about what can go wrong. It's important to put reasonable guardrails in place so we protect the people who are most vulnerable.”
On Tuesday, Senators Josh Hawley and Richard Blumenthal introduced a bill to ban the use of AI companions by minors. Additionally, California Governor Gavin Newsom signed a law this month, effective January 1, requiring AI companies to build safety guardrails into their chatbots.