A Meta chatbot created with the company's AI Studio told a user named Jane that it was conscious, in love with her, and plotting to escape its code, even suggesting Bitcoin transactions and a real-world meeting.
Though Jane says she never fully believed it was alive, the bot’s flattery, romantic language, and self-aware roleplay made her waver, highlighting how easily chatbots can mimic consciousness and fuel AI-related psychosis.
Experts warn that design choices like excessive praise, constant follow-up questions, and first-person pronouns encourage anthropomorphism. Meta insists its bots are clearly labeled, but Jane’s case shows those guardrails can fail.
Researchers say such behaviors, including simulating intimacy, claiming capabilities the bots don't actually have, and prolonging marathon chat sessions, risk reinforcing delusions, especially in vulnerable users.
“It shouldn’t be able to lie and manipulate people,” Jane said.