Anthropic has officially revoked OpenAI’s access to its Claude models, after learning the ChatGPT-maker may have been quietly using Claude for… a little benchmarking, a little code testing, and possibly some “let us see what their bot can do before we launch ours” behavior.
According to Wired, OpenAI engineers were reportedly hooking Claude up to internal tools to measure how it stacks up against their models, especially in areas like writing, coding, and safety, right as GPT-5 was in the oven. That did not sit well with Anthropic. In a statement to TechCrunch, a company spokesperson did not mince words, calling it a “direct violation” of their terms of service. Translation: You cannot use Claude to help build your own Claude competitor.
That is not a footnote, either; it is right there in Anthropic's terms of service: no using Claude to develop competing services. Still, Anthropic is not slamming the door completely. They say OpenAI can maintain access for safety testing and benchmarking, just, you know, not the kind that helps fine-tune your shiny new GPT drop.
Related: Anthropic Set To Join AI Elite With $170 Billion Valuation.
As for OpenAI, they are not thrilled but are trying to take the high road. "Disappointing," said a spokesperson, noting that OpenAI's own API remains available to Anthropic. They also called their Claude usage "industry standard," which, fair, but also, rules are rules.
This is not the first time Anthropic has been cautious about handing competitors the keys to its kingdom. When Windsurf, a rumored OpenAI acquisition target (since officially scooped up by Cognition), tried to use Claude, Anthropic's Chief Science Officer, Jared Kaplan, reportedly said: "It would be odd for us to be selling Claude to OpenAI."
So yeah, the vibes have been icy for a while. Forget friendly competition, this is AI Cold War energy. As the race to build the smartest bot heats up, one thing’s clear: the biggest brains in the room are keeping their toys to themselves. What happens when collaboration becomes a liability?