Anthropic just made a big move to strengthen its enterprise AI game by scooping up the co-founders and most of the team behind Humanloop, a startup known for its tools that help companies safely and effectively run AI at scale.
The terms of the deal were not disclosed, but this looks like a classic acqui-hire, the kind of move becoming increasingly common in the industry's escalating talent war. Humanloop's three co-founders, Raza Habib (CEO), Peter Hayes (CTO), and Jordan Burgess (CPO), are all now at Anthropic, along with roughly a dozen engineers and researchers.
Anthropic didn’t buy Humanloop’s assets or IP, but in AI, the real intellectual property lives in people’s heads. And the Humanloop crew is bringing exactly the kind of expertise Anthropic needs: enterprise-grade prompt management, model evaluation, and AI observability.
“Their proven experience in AI tooling and evaluation will be invaluable as we continue to advance our work in AI safety and building useful AI systems,” said Brad Abrams, Anthropic’s API product lead.
The move comes as Anthropic pushes to lead in agentic (autonomous agent) and coding capabilities for enterprises, areas where model quality alone isn’t enough. Owning the tooling layer could help it outpace OpenAI and Google DeepMind in performance and enterprise readiness.
Founded in 2020 as a spinout from University College London, Humanloop quickly earned a reputation for making AI safer and more reliable for enterprise clients like Duolingo, Gusto, and Vanta. Backed by Y Combinator and Index Ventures, the company raised $7.91 million in seed funding before winding down operations last month ahead of the acquisition.
The timing is telling. Just this week, Anthropic struck a deal with the U.S. government's central purchasing arm to provide AI services to agencies across all branches of government at $1 per agency for the first year. That's a direct play to undercut OpenAI's similar offer, and government buyers demand the kind of evaluation, monitoring, and compliance tools Humanloop was known for.
The alignment isn’t just strategic; it is cultural. Anthropic brands itself as a “safety-first” AI company, and Humanloop’s evaluation workflows, from performance tracking to bias mitigation, fit perfectly into that narrative.
“From our earliest days, we’ve been focused on creating tools that help developers build AI applications safely and effectively,” said Habib. “Anthropic’s commitment to AI safety research and responsible AI development perfectly aligns with our vision.”
With this move, Anthropic isn't just betting on better AI models; it's betting on the infrastructure, oversight, and human expertise that will decide which AI companies remain trusted partners to the world's biggest customers. The question now: in a race for AI dominance, will the winners be those who build the smartest models, or those who make them safest to use?