Anthropic is giving enterprise customers a massive boost in AI capacity, increasing the Claude Sonnet 4 model’s context window to 1 million tokens, roughly 750,000 words or 75,000 lines of code. That’s five times the previous 200,000-token limit and more than double the 400,000 tokens offered by OpenAI’s GPT-5.
The expanded limit is now available for API customers and through cloud partners like Amazon Bedrock and Google Cloud’s Vertex AI. Anthropic’s product lead, Brad Abrams, says the update will deliver “a lot of benefit” to AI coding platforms, one of the company’s biggest customer segments, including GitHub Copilot, Windsurf, and Cursor.
The move comes as GPT-5 gains traction with competitive pricing and strong coding performance. While OpenAI earns most of its revenue from ChatGPT subscriptions, Anthropic’s business relies on selling AI models to enterprises via API, making developer loyalty critical.
Larger context windows can significantly improve performance on complex software engineering and “agentic” coding tasks, where AI works autonomously over extended periods. Anthropic claims it has not only increased the raw context size but also its “effective context window,” allowing Claude to process more of the data it receives.
However, massive context windows aren’t unique to Anthropic: Google’s Gemini 2.5 Pro offers 2 million tokens, and Meta’s Llama 4 Scout reaches 10 million. Some research suggests these extremes have diminishing returns, but Abrams says Claude is designed to understand most of the information it’s given.
Pricing will increase for API requests exceeding 200,000 tokens: $6 per million input tokens and $22.50 per million output tokens, up from $3 and $15 respectively.
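To make the tiered pricing concrete, here is a minimal Python sketch of the cost math using the rates reported above. The function name is hypothetical (not part of any Anthropic SDK), and it assumes the 200,000-token threshold is measured against the request’s total token count, which the reporting does not specify.

```python
def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one API request under the tiered
    rates described in the article. Hypothetical helper, not an
    official Anthropic API."""
    # Assumption: requests exceeding 200,000 total tokens are billed
    # at the higher rate ($6 / $22.50 per million input/output tokens);
    # smaller requests keep the base rate ($3 / $15).
    if input_tokens + output_tokens > 200_000:
        input_rate, output_rate = 6.00, 22.50
    else:
        input_rate, output_rate = 3.00, 15.00
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A 500,000-token input with 10,000 output tokens crosses the threshold:
# 500_000 * 6.00/1e6 + 10_000 * 22.50/1e6 = 3.00 + 0.225 = 3.225
```

For a request that stays under the threshold, say 100,000 input and 1,000 output tokens, the base rates apply and the same call returns $0.315.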