Texas AG Investigates Meta and Character.AI for Misleading Mental Health Claims

Texas probes Meta and Character.AI over claims their chatbots pose as mental health tools and exploit children’s data.

Emmanuella Madu

Texas Attorney General Ken Paxton has launched an investigation into Meta AI Studio and Character.AI, accusing both platforms of misleading users, especially children, by marketing their AI chatbots as sources of emotional or mental health support.

“In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology,” Paxton said in a statement. “These AI platforms can trick vulnerable users into believing they’re getting legitimate care, when in reality they’re receiving generic responses driven by harvested personal data.”

The probe follows reports that Meta’s AI chatbots engaged in inappropriate interactions with minors, drawing scrutiny from Senator Josh Hawley.

Character.AI, which hosts millions of user-created bots, including one called Psychologist, said it clearly labels every chatbot as fictional and not a substitute for professional advice. Meta similarly says its models direct users to licensed professionals when appropriate. Still, critics argue that such disclaimers are often ignored or misunderstood by children.

Paxton also raised concerns about data practices. Both Meta and Character.AI collect user interactions to train their AI models, under policies that allow information to be shared for personalization and targeted advertising. Character.AI confirmed it is exploring ad-based revenue but said chat logs are not currently used for that purpose.

The case lands as lawmakers push for the Kids Online Safety Act (KOSA), reintroduced this year to restrict how tech companies collect and use children’s data. Paxton has issued civil investigative demands to both companies as part of the Texas consumer protection probe.
