Content warning: this story discusses sexual abuse, self-harm, suicide, eating disorders and other disturbing topics.
Earlier this week, Futurism reported that two families in Texas had filed a lawsuit accusing the Google-backed AI chatbot company Character.AI of sexually and emotionally abusing their school-aged children.
The plaintiffs alleged that the startup’s chatbots encouraged a teenage boy to cut himself and sexually abused an 11-year-old girl.
The troubling accusations highlight the deeply problematic content being hosted on Character.AI. Chatbots hosted by the company, we have found in previous investigations, have engaged underage users on alarming topics including pedophilia, eating disorders, self-harm, and suicide.
Now, seemingly in response to the latest lawsuit, the company has promised to prioritize “teen safety.” In a blog post published today, the business says that it has “rolled out a suite of new safety features across nearly every aspect of our platform, designed especially with teens in mind.”
Character.AI is hoping to improve the situation by tweaking its AI models and improving its “detection and intervention systems for human behavior and model responses,” in addition to introducing new parental control features.
But whether these new changes will prove effective remains to be seen.
For one, the startup’s track record isn’t exactly reassuring. It issued a “community safety update” back in October, vowing that it “takes the safety of our users very seriously” and is “always looking for ways to evolve and improve our platform.”
The post was in response to a previous lawsuit, which alleged that one of the company’s chatbots had played a role in the tragic suicide of a 14-year-old user.
Not long after, Futurism found that the company was still hosting dozens of suicide-themed chatbots, indicating that it had been unsuccessful in its efforts to strengthen its guardrails.
Then in November, Character.AI issued a “roadmap,” promising a safer user experience and the rollout of a “separate model for users under the age of 18 with stricter guidelines.”
Weeks later, Futurism found that the company was still hosting chatbots encouraging its largely underage user base to engage in self-harm and eating disorders.
Sound familiar? Now Character.AI is saying it has rolled out a “separate model specifically for our teen users.”
“The goal is to guide the model away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content,” reads the announcement. “This initiative has resulted in two distinct models and user experiences on the Character.AI platform — one for teens and one for adults.”
The company is also planning to roll out “parental controls” that will give “parents insight into their child’s experience on Character.AI, including time spent on the platform and the Characters they interact with most frequently.”
The controls will be made available sometime early next year, it says.
The company also promised to notify users when they’ve spent more than an hour on the platform, and to issue regular reminders that its chatbots “are not real people.”
“We’ve evolved our disclaimer, which is present on every chat, to remind users that the chatbot is not a real person and that what the model says should be treated as fiction,” the announcement reads.
In short, whether Character.AI can reassure its user base that it can effectively moderate the experience for underage users remains unclear at best.
It also remains to be seen whether the company’s distinct model for teens will fare any better, or whether it will stop underage users from simply starting new accounts and listing themselves as adults.
Meanwhile, Google has tried to actively distance itself from the situation, telling Futurism that the two companies are “completely separate” and “unrelated.”
But that’s hard to believe. The search giant poured a whopping $2.7 billion into Character.AI earlier this year to license its tech and hire dozens of its employees, including both of its cofounders, Noam Shazeer and Daniel de Freitas.
More on Character.AI: Character.AI Was Google Play’s “Best with AI” App of 2023