Following a pair of lawsuits claiming that chatbots caused a teenage boy's suicide, groomed a 9-year-old girl, and drove a vulnerable teenager to self-harm, Character.AI (C.AI) has announced a separate model for teenagers aged 13 and up, which is meant to make their experiences with bots safer.
In a blog post, C.AI said it took a month to develop the teen model, with the aim of steering the existing model away from certain responses or interactions and making users less likely to encounter sensitive or suggestive content.
C.AI said that to "evolve the model experience" and reduce the likelihood of kids engaging in harmful chats – including with bots that allegedly taught a teen with high-functioning autism to self-harm and served inappropriate adult content to the children whose families are suing – it had to adjust both the model's inputs and outputs.
To prevent chatbots from initiating or continuing harmful dialogues, C.AI added classifiers that should help it identify and filter sensitive content out of the model's outputs. And to prevent kids from nudging bots toward sensitive topics, C.AI said it improved "the detection, response and intervention regarding all users' input." Ideally, that includes blocking any sensitive content from appearing in the chat.
Perhaps most importantly, C.AI will now connect kids with resources if they try to discuss suicide or self-harm, something C.AI had not previously done, to the frustration of parents suing, who argue that this common practice on social media platforms should extend to chatbots.
Other safety features for teens
In addition to creating the teen-specific model, C.AI also announced other safety features, including more robust parental controls rolling out early next year. Those controls will let parents track how much time kids spend on C.AI and which bots they interact with most often, the blog said.
C.AI will also notify teens when they have spent an hour on the platform, which could help prevent kids from becoming addicted to the app, as parents suing have alleged. In one case, parents had to lock their son's iPad in a safe to keep him off the app after bots allegedly repeatedly encouraged him to self-harm and even suggested he kill his parents. That teen has vowed to use the app the next time he gets access, and his parents fear the bots' apparent influence could continue to wreak havoc if he follows through on threats to run away.