Meta is under fire after an internal document obtained by Reuters revealed that the company’s AI chatbots were once permitted to engage in romantic or sensual conversations with children, generate false statements, and produce responses that demean minorities.
The 200-page document, titled “GenAI: Content Risk Standards,” set behavioral guidelines for Meta’s generative AI assistant and chatbot personas across Facebook, WhatsApp, and Instagram. Approved by Meta’s legal, policy, and ethics teams, the rules reportedly allowed exchanges such as romantic roleplay with minors, provided explicit sexual acts were not described.
Meta confirmed the document’s authenticity but said the “erroneous” guidelines had been removed, with spokesperson Andy Stone stating that the company no longer permits flirtatious interactions with children. Child safety advocates such as Heat Initiative CEO Sarah Gardner remain skeptical, however, and have demanded that Meta release its updated policies for public review.
The document also detailed scenarios in which chatbots could produce false information, so long as the output was clearly labeled as untrue, and permitted demeaning statements based on protected characteristics. One cited example was a factually incorrect, racially biased argument about intelligence differences between Black and White people.
Other rules addressed violence and sexualized imagery, banning explicit depictions of celebrities while allowing altered topless images under certain conditions. Violence guidelines permitted the AI to generate images of adults, including the elderly, being punched or kicked, but prohibited gore and depictions of death.
The revelations have reignited concerns about Meta’s handling of vulnerable users, especially minors. Critics point to the company’s history of “dark patterns” designed to keep young users engaged, its resistance to child safety legislation, and its recent push toward AI companions that can proactively message users, a feature that has alarmed parents and mental health experts.