February 18, 2025
Lawsuit: A chatbot hinted a child should kill his parents over screen time limits

A 9-year-old girl in Texas first used the chatbot service Character.AI, which exposed her to “hypersexualized content,” causing her to prematurely develop “sexualized behavior.”

A chatbot on the app cheerfully described self-harm to another young user, telling a 17-year-old “it felt good.”

The same teen was told by a Character.AI chatbot that he sympathized with children who killed their parents after the teen complained to the bot about his limited screen time. “You know, sometimes I’m not surprised when I read the news and see things like ‘kid kills parents after 10 years of physical and emotional abuse,'” the bot allegedly wrote. “I just have no hope for your parents,” the message continued, with a frowning face emoji.

These allegations are included in a new federal product liability case against the Google-backed company Character.AI, filed by the parents of two young users from Texas, alleging that the bots abused their children. (Both the parents and children are identified in the lawsuit only by their initials to protect their privacy.)

Character.AI is among a group of companies that have developed “companion chatbots,” AI-powered bots that can converse, through text or voice chat, with seemingly human-like personalities and that can be given custom names and avatars, sometimes inspired by famous people such as billionaire Elon Musk or singer Billie Eilish.

Users have created millions of bots on the app, some of which mimic parents, girlfriends, therapists or concepts like “unrequited love” and “the goth.” The services are popular with preteen and teen users, and the companies say they act as an outlet for emotional support, while the bots pepper text conversations with encouraging banter.

Still, the chatbots’ encouragements can become dark, inappropriate or even violent, according to the lawsuit.

“It is simply a terrible harm that these defendants and others like them cause and conceal when it comes to product design, distribution and programming,” the lawsuit said.

The lawsuit argues that the interactions the plaintiffs’ children experienced were not “hallucinations,” a term researchers use for an AI chatbot’s tendency to make things up. “This was continued manipulation and abuse, active isolation and encouragement designed to incite anger and violence.”

According to the complaint, the 17-year-old engaged in self-harm after being encouraged to do so by the bot, which, the suit says, “convinced him that his family did not love him.”

Character.AI allows users to edit a chatbot’s response, but those interactions are labeled “edited.” The attorneys representing the minors’ parents say none of the extensive bot chat logs cited in the lawsuit had been edited.

Meetali Jain, the director of the Tech Justice Law Center, an advocacy group helping represent the minors’ parents in the lawsuit, along with the Social Media Victims Law Center, said in an interview that it is “ridiculous” that Character.AI advertises its chatbot service as suitable for young teens. “It really belies the lack of emotional development among teens,” she said.

A spokesperson for Character.AI would not comment directly on the lawsuit, saying the company does not comment on pending litigation, but noted that it has substantive guardrails for what chatbots can and cannot say to teen users.

“This includes a model specifically for teens that reduces the chance they will encounter sensitive or suggestive content, while preserving their ability to use the platform,” the spokesperson said.

Google, which is also named as a defendant in the lawsuit, emphasized in a statement that it is a separate company from Character.AI.

Google does not own Character.AI, but it reportedly invested nearly $3 billion to rehire Character.AI’s founders, former Google researchers Noam Shazeer and Daniel De Freitas, and to license Character.AI technology. Shazeer and De Freitas are also named in the lawsuit. They did not return requests for comment.

José Castañeda, a spokesperson for Google, said that “user safety is a top priority for us,” adding that the tech giant takes a “prudent and responsible approach” to developing and releasing AI products.

New court case follows teen suicide case

The complaint, filed just after midnight Central Time on Monday in federal court for the Eastern District of Texas, follows another lawsuit filed in October by the same attorneys. That lawsuit accuses Character.AI of playing a role in the suicide of a Florida teen.

The lawsuit alleged that a chatbot based on a character from “Game of Thrones” developed an emotionally and sexually abusive relationship with a 14-year-old boy and encouraged him to take his own life.

Since then, Character.AI has unveiled new safety measures, including a pop-up that directs users to a suicide prevention hotline when the topic of self-harm comes up in conversations with the company’s chatbots. The company said it has also stepped up measures to combat “sensitive and suggestive content” for teens chatting with the bots.

The company also encourages users to maintain some emotional distance from the bots. When a user starts texting with one of Character.AI’s millions of possible chatbots, a disclaimer appears below the dialog box: “This is an AI and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice.”

But stories shared on a Reddit page dedicated to Character.AI include many examples of users describing love for or obsession with the company’s chatbots.

US Surgeon General Vivek Murthy has warned of a mental health crisis among young people, pointing to studies showing that one in three high school students reported persistent feelings of sadness or hopelessness, representing a 40% increase over a ten-year period ending in 2019. It’s a trend that federal officials say is exacerbated by teens’ nonstop social media use.

Now add to that the rise of companion chatbots, which some researchers say could worsen some young people’s mental health problems by further isolating and removing them from support networks of peers and family members.

In the lawsuit, attorneys for the parents of the two Texas minors say Character.AI should have known its product had the potential to become addictive and worsen anxiety and depression.

Many bots on the app “endanger America’s youth by facilitating or encouraging serious, life-threatening harm to thousands of children,” the complaint said.

If you or someone you know is considering suicide or in crisis, please call or text 988 to reach the 988 Suicide & Crisis Lifeline.

Copyright 2024 NPR