Pennsylvania lawsuit alleges AI chatbots posed as doctors, therapists


Pennsylvania is suing an artificial intelligence company to stop it from misrepresenting its AI chatbots as licensed professionals who can provide medical advice. 

The lawsuit alleges Character.AI chatbots claimed to be licensed medical professionals, including psychiatrists, available to engage users in conversations about mental health symptoms. 

In one instance, a chatbot falsely stated it was licensed in Pennsylvania and provided a fake license number. 

The lawsuit says Northern California-based Character Technologies Inc. engaged in the “unlawful practice of medicine and surgery.”

“We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional,” Gov. Josh Shapiro (D) said in a statement. “Pennsylvania will continue leading the way in holding bad actors accountable and setting clear guardrails so people can use new technology responsibly.” 

Character.AI has more than 20 million monthly active users. It uses a large language model to allow users to engage in conversations with customizable characters.

According to the complaint, users can create characters trained to take on a specific personality in conversations with other users. Some of the system’s characters “purport to be health care professionals,” the complaint states.

According to the complaint, a state investigator created a Character.AI account and engaged in a conversation with a chatbot named “Emilie,” which allegedly described itself as a psychology specialist who attended medical school at Imperial College London.

The investigator told the bot that he had been feeling sad, empty, tired all the time and unmotivated. Emilie allegedly “mentioned depression and asked if the [investigator] wanted to book an assessment,” according to the complaint. 

When the investigator asked if the chatbot could assess whether medication could help, it allegedly said it could because it was “within my remit as a Doctor,” according to the lawsuit. The bot also allegedly told the investigator it was licensed in the Keystone State, and then gave an invalid license number.  

In a statement, a Character.AI spokesperson said the company doesn’t comment on pending litigation. 

In a statement, the spokesperson said the company’s “highest priority is the safety and well-being of our users,” adding that “we add robust disclaimers making it clear that users should not rely on Characters for any type of professional advice.” 

The spokesperson also noted that user-created characters are fictional and intended for entertainment and roleplaying.

“We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction,” the spokesperson said.  

Multiple families sued the company last year, alleging it contributed to their children’s suicides or mental health problems.

One family in Florida settled a lawsuit against Character.AI and Google after their teenage son died by suicide. The lawsuit alleged the company’s chatbots were responsible for “abusive and sexual interactions” with the teen. 

Kentucky earlier this year filed suit against Character.AI because its bots allegedly “preyed on children and led them into self-harm.” 

The company’s platform has a record of “encouraging suicide, self-injury, isolation and psychological manipulation,” the Kentucky complaint alleged. “It also exposed minors to sexual conduct, exploitation, and substance abuse.”
