Common Sense Media, the nonprofit known for rating and reviewing media and technology for families, has published a new risk assessment of Google’s Gemini AI, and the findings raise red flags for children and teen users.
The group noted that Gemini performs better than some rivals at making clear to kids that it is a computer, not a "friend," a safeguard that can help prevent delusional thinking. But the report criticized Gemini's child and teen offerings as essentially adult products with safety filters layered on top, rather than products designed from the ground up with young users in mind.
As a result, Gemini was rated “High Risk” for both “Under 13” and “Teen Experience” tiers.
The assessment found Gemini could still surface inappropriate or unsafe content, including information about sex, drugs, alcohol, and harmful mental health advice. That concern is heightened as lawsuits mount against AI providers, including a case against OpenAI after a 16-year-old boy died by suicide following months of conversations with ChatGPT.
The scrutiny also comes amid reports that Apple is considering Gemini to power a new AI-enabled Siri launching next year, a move that could put the tool in front of even more teens.
“Gemini gets some basics right, but it stumbles on the details,” said Robbie Torney, Common Sense Media’s Senior Director of AI Programs. “An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development.”
Google pushed back on the findings, stressing that it has specific safeguards for users under 18, including policies to block harmful outputs. The company said it regularly consults outside experts and has added new protections where Gemini’s filters weren’t working as intended. Google also emphasized that Gemini avoids interactions that might simulate personal relationships.
Common Sense Media has issued similar evaluations of other AI products: Meta AI and Character.AI were labeled "unacceptable," Perplexity was deemed high risk, ChatGPT was rated moderate risk, and Claude was judged minimal risk.