So, about your private chats with Meta’s AI? They might not have been as private as you thought. Meta recently fixed a security flaw in its AI platform that quietly allowed users to access other people’s AI prompts and the responses generated from them. That is right, someone could have seen your midnight AI brainstorms, your breakup text drafts, or whatever else you have been throwing at the bot. And the only thing standing between your data and a stranger? A guessable string of numbers.
The issue was not discovered by Meta’s engineers. According to TechCrunch, it was found by Sandeep Hodkasia, a security researcher and founder of AppSecure. He reported the bug on December 26, 2024, and Meta confirmed it rolled out a fix by January 24, 2025. For his efforts, Hodkasia received a $10,000 bug bounty and the gratitude of anyone who has ever used Meta AI for something they didn’t want made public.
So what was the bug, exactly?
Hodkasia found the vulnerability by poking around Meta AI’s editing feature, which allows logged-in users to tweak their past prompts and regenerate responses. He noticed that each prompt was tied to a unique ID number on Meta’s servers. But here is the kicker: there was no access check to confirm whether the user requesting the data owned the prompt they were trying to view.
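Meta has not published the faulty code, of course, but this class of flaw, known as an insecure direct object reference, tends to look something like the hypothetical sketch below. Everything in it, from the route to the field names, is invented for illustration; the point is the missing ownership check.

```python
# Hypothetical sketch of an insecure direct object reference, not Meta's
# actual code. Routes, helpers, and field names are invented for illustration.
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "demo-only"

# Stand-in for the real datastore: prompt ID -> record.
PROMPTS = {
    1001: {"owner_id": 7, "prompt": "draft a breakup text"},
    1002: {"owner_id": 9, "prompt": "midnight brainstorm"},
}

# Vulnerable pattern: the handler trusts the ID in the URL and never checks
# whether the logged-in user actually owns that prompt.
@app.route("/prompts/<int:prompt_id>")
def get_prompt_vulnerable(prompt_id):
    record = PROMPTS.get(prompt_id)
    if record is None:
        abort(404)
    return jsonify(record)  # happily returns anyone's prompt

# Fixed pattern: confirm ownership before returning the record.
@app.route("/v2/prompts/<int:prompt_id>")
def get_prompt_fixed(prompt_id):
    record = PROMPTS.get(prompt_id)
    if record is None or record["owner_id"] != session.get("user_id"):
        abort(404)  # respond the same whether the prompt is missing or just not yours
    return jsonify(record)
```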
That meant a user could change the number in their browser’s network request and get back someone else’s conversation with Meta AI. As Hodkasia put it, the prompt IDs were “easily guessable,” meaning a bad actor could automate the process and potentially scrape huge volumes of user-generated content. Scary? Absolutely.
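To see why “easily guessable” matters, here is a hypothetical sketch of how sequential IDs invite bulk scraping once the server skips its ownership check. The endpoint, token placeholder, and ID range are all made up; this is a conceptual illustration, not a recipe against Meta’s servers.

```python
# Hypothetical sketch: with sequential, guessable IDs and no server-side
# ownership check, scraping is just a loop. The URL and token are placeholders.
import requests

BASE = "https://example.invalid/prompts/{}"  # placeholder endpoint, not a real API
client = requests.Session()
client.headers["Authorization"] = "Bearer <the requester's own valid token>"

for prompt_id in range(1000, 1100):  # walk a block of sequential IDs
    resp = client.get(BASE.format(prompt_id))
    if resp.status_code == 200:
        # Without an ownership check, each hit is someone else's conversation.
        print(prompt_id, resp.json())
```

Unguessable (random) IDs would slow this down, but the real fix is the ownership check itself, which is what Meta’s patch addressed.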
Was anyone affected?
According to Meta, no signs of malicious exploitation were found. That is good news. But it also raises a classic question in tech: if one ethical hacker could discover this hole, who is to say someone less ethical didn’t find it first and stay quiet? Meta has not said whether it logged access to these prompt IDs before the fix went live.
It is a stark reminder that, as generative AI explodes in popularity, the race to ship new features can sometimes outpace the guardrails needed to keep user data safe.
This is not the first time Meta AI has had a privacy hiccup. When its standalone chatbot app launched earlier this year to compete with rivals like ChatGPT, some users accidentally made private conversations public, not through a bug, but through poor design. And now this? That is two privacy stumbles before the app has even had time to catch its breath. Still, credit where it’s due: Meta acted on the bug report fast, patched the issue, and compensated the researcher. Not all tech giants move that quickly, or acknowledge the problem at all.
As Meta, OpenAI, Google, and others race to dominate the AI space, user data is the fuel behind the machine. But what happens when the systems meant to protect that data falter? Whether it is a backend bug or a front-end design flaw, privacy is not just a feature; it is the baseline. Or at least, it should be.
This incident is a reminder that even Big Tech is not immune to small mistakes with big consequences. And while Meta insists no harm was done, the underlying issue remains: when your AI platform is powered by personal input, you better make sure that input stays locked down. Meta says your AI convos are safe now, but in the age of guessable prompt IDs and rushed rollouts, “private” should never just be a setting.