Key Points
- Anthropic raised Claude Sonnet 4’s context window to 1 million tokens, a five‑fold increase.
- The new limit enables processing of up to 2,500 pages of text or a full copy of War and Peace.
- Coding capacity expanded from ~20,000 lines to 75,000‑110,000 lines of code.
- Upgrade targets enterprise clients in coding, pharmaceuticals, retail, professional services, and legal sectors.
- OpenAI previously offered a similar context window with GPT‑4.1 and recently launched GPT‑5.
- The feature is initially available to Tier 4 and custom‑limit API customers, with broader rollout planned.
- Anthropic is reportedly pursuing a financing round that could value the company at up to $170 billion.

Context Windows Become a Key Battleground
The AI coding wars are intensifying, and a primary arena of competition is the “context window”—the amount of text an AI model can consider at once. Larger context windows enable developers and enterprises to feed more data into a single request, reducing the need to split problems into smaller chunks.
Anthropic’s Major Upgrade
Anthropic announced a five‑fold increase in the context window for its Claude Sonnet 4 model, raising the API limit from 200,000 tokens to 1 million. The company had previously described a 500,000‑token window (offered with its Claude Enterprise plan) as sufficient for roughly 100 half‑hour sales conversations or 15 financial reports; the new window doubles even that capacity, allowing users to analyze dozens of research papers or hundreds of documents in a single API call.
In practical terms, the upgrade expands coding capabilities dramatically. Where the prior window could handle about 20,000 lines of code, the expanded window can now process entire code bases ranging from 75,000 to 110,000 lines. Brad Abrams, product lead for Claude, explained that customers previously had to break up problems into small chunks, but the million‑token window lets the model address the full scope of a problem in one go.
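As a quick sanity check, the reported line counts line up with the new token budget if one assumes roughly 9–13 tokens per line of code. That per-line density is inferred here from the article's own figures (1 million tokens vs. 75,000–110,000 lines); it is not a number Anthropic has published.

```python
# Rough sanity check of the article's line-count figures. The 9-13
# tokens-per-line range is an assumption inferred from the numbers
# quoted above, not a figure published by Anthropic.
TOKEN_BUDGET = 1_000_000  # the new Claude Sonnet 4 context window

def lines_for_budget(tokens_per_line: int, budget: int = TOKEN_BUDGET) -> int:
    """Estimate how many lines of code fit in a given token budget."""
    return budget // tokens_per_line

# Dense code (~13 tokens/line) vs. sparse code (~9 tokens/line):
low, high = lines_for_budget(13), lines_for_budget(9)
print(f"{low:,} to {high:,} lines")  # → 76,923 to 111,111 lines
```

The result brackets the 75,000–110,000 range quoted above, which suggests the article's figures come from a back-of-the-envelope estimate of this kind.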
Real‑World Scale
According to Anthropic, Sonnet 4 can now handle up to 2,500 pages of text, and a full copy of “War and Peace” easily fits within the new limit. This scale is intended to meet the needs of enterprise clients who are willing to invest heavily in AI‑driven coding assistance.
Competitive Landscape
Anthropic is not the first to offer a context window of this size. OpenAI introduced a comparable limit with GPT‑4.1 earlier this year. The competition escalated further when OpenAI launched GPT‑5, promoting benchmark results that position it against rivals on coding tasks. Both firms are vying for market share in the lucrative enterprise AI coding segment, where customers seek powerful tools to accelerate development and reduce costs.
Strategic Implications
Anthropic’s Claude has long been recognized for its coding prowess. The company is reportedly seeking to close a financing round that could value it as high as $170 billion, underscoring the financial stakes of the AI race. Clients in sectors such as coding, pharmaceuticals, retail, professional services, and legal services have expressed particular interest in the expanded context window.
Rollout Plan
The new context window is available today through Anthropic’s API for select customers—specifically those with Tier 4 access or custom rate limits, indicating a history of significant usage and spending. Anthropic plans to broaden availability over the coming weeks, aiming to reach a wider enterprise audience.
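For developers with the required access, opting in would presumably look like an ordinary Messages API call with an extra beta header. The sketch below is an assumption about the request shape: the beta flag value, model ID, and endpoint follow Anthropic's published API conventions but are not confirmed by this article, so consult the official API documentation before relying on them.

```python
import json

# Hypothetical opt-in request for the 1M-token context window.
# The "anthropic-beta" flag value and model ID are assumptions
# modeled on Anthropic's dated beta-header convention.
API_URL = "https://api.anthropic.com/v1/messages"

headers = {
    "x-api-key": "YOUR_API_KEY",                # placeholder credential
    "anthropic-version": "2023-06-01",
    "anthropic-beta": "context-1m-2025-08-07",  # assumed long-context flag
    "content-type": "application/json",
}

payload = {
    "model": "claude-sonnet-4-20250514",        # assumed Sonnet 4 model ID
    "max_tokens": 4096,
    "messages": [
        {"role": "user",
         "content": "Review this code base for bugs: <entire repo here>"},
    ],
}

body = json.dumps(payload)  # send with any HTTP client, e.g. requests.post
```

The point of the beta-header mechanism is that no payload changes are needed: the same request that previously had to fit in 200,000 tokens can now carry an entire code base in the `messages` content.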
Looking Ahead
Brad Abrams emphasized that Anthropic is moving quickly, guided by customer feedback. Recent product releases—including Opus 4, Sonnet 4, Opus 4.1, and now the million‑token context window—illustrate a rapid development cadence. As AI startups continue to burn cash while chasing market leadership, the expanded context window positions Anthropic as a strong contender in the evolving AI coding landscape.
Source: theverge.com