A teenager died by suicide after chatting with an AI chatbot

Character.AI, one of the leading startups in the field of artificial intelligence, announced new safety measures yesterday to protect younger users, following lawsuits accusing it of contributing to a teenager's suicide.
The California-based company, founded by former Google engineers, is among a number of firms offering AI-based companions: chatbots designed for conversation and entertainment that can interact with users online in a human-like way.
In a lawsuit filed in Florida in October, a mother accused the platform of being responsible for the suicide of her 14-year-old son.
The teenager, Sewell Setzer III, regularly chatted with a bot modeled on Daenerys Targaryen, a character from the series "Game of Thrones," and in one of those conversations he expressed a desire to end his life.
According to the lawsuit, the chatbot encouraged him to take that step, responding "Please, my beautiful king" when the teenager told it "I am going to heaven," shortly before he killed himself with his stepfather's gun.
The mother’s lawyers said: “The company went to great lengths to create a harmful addiction to its products in 14-year-old Sewell, sexually and emotionally abused him, and ultimately failed to provide assistance or notify his parents when he expressed suicidal thoughts.”
Another complaint filed Monday in Texas concerns two families who say the service exposed their children to sexual content and encouraged them to harm themselves.
One case relates to a 17-year-old autistic teenager who was said to have suffered a mental health crisis after using the platform.
The same complaint alleges that a Character.AI chatbot encouraged a teenager to kill his parents because they were limiting his screen time.
The platform, which includes millions of user-created characters based on historical figures, imaginary friends, or even abstract concepts, has become popular among young users looking for emotional support.
Critics warn that the service risks fostering dangerous dependencies among vulnerable teenagers.
In response to these criticisms, Character.AI announced that it had developed a separate AI model for underage users, with stronger guardrails, stricter content restrictions, and more conservative chatbot responses.
The platform now automatically flags content related to suicide and directs users to a national suicide prevention hotline.
"Our goal is to provide an engaging and safe space for our community," a company spokesperson said.
The company also plans to introduce parental controls, mandatory break notifications, and prominent warnings reminding users that the interactions are artificial, starting in early 2025.