Key Points
- Claude Sonnet 4 context window increased to 1 million tokens (≈750,000 words).
- Upgrade available via Anthropic API and cloud partners Amazon Bedrock and Google Cloud Vertex AI.
- Pricing for inputs above 200,000 tokens rises to $6 per million input tokens and $22.50 per million output tokens.
- Larger context aims to improve performance on long‑horizon coding tasks.
- Anthropic emphasizes an “effective context window” beyond raw token count.
- Move positions Anthropic against OpenAI’s GPT‑5 and other large‑context models.
- Company’s business focuses on enterprise API sales rather than consumer subscriptions.

Expanded Context Window
Anthropic announced that its Claude Sonnet 4 model now supports a context window of 1 million tokens for API customers. At that size, the model can accept prompts of up to roughly 750,000 words, which the company likens to the entire “Lord of the Rings” trilogy, or about 75,000 lines of code. The new limit is five times the model’s previous 200,000‑token ceiling.
Availability and Pricing
The expanded context window is also being rolled out through Anthropic’s cloud partners, including Amazon Bedrock and Google Cloud’s Vertex AI. For prompts that exceed the former 200,000‑token threshold, Anthropic will charge higher rates: $6 per million input tokens and $22.50 per million output tokens, up from $3 per million input tokens and $15 per million output tokens.
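The tiered rates can be sketched as a simple cost estimator. This is an illustrative simplification, not Anthropic’s official billing logic: it assumes the premium rate applies to an entire request once its prompt exceeds 200,000 input tokens.

```python
def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one Claude Sonnet 4 API request.

    Illustrative assumption: once the prompt exceeds 200,000 input
    tokens, the premium rates apply to the whole request.
    """
    if input_tokens > 200_000:
        input_rate, output_rate = 6.00, 22.50   # USD per million tokens
    else:
        input_rate, output_rate = 3.00, 15.00
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A 500,000-token prompt with a 10,000-token reply:
estimate_cost(500_000, 10_000)  # → 3.225 (USD)
```

Under these assumptions, the same 500,000‑token prompt would have been impossible before the upgrade, and a request just under the 200,000‑token threshold is billed at half the input rate.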
Competitive Landscape
Anthropic’s move comes as OpenAI offers a 400,000‑token context window with its GPT‑5 model, and other firms such as Google and Meta provide even larger limits for their Gemini 2.5 Pro and Llama 4 Scout models. While Anthropic acknowledges the broader trend toward larger context windows, it emphasizes that its research focuses not just on raw size but on an “effective context window” that can understand most of the information provided.
Impact on Developers and Enterprise Customers
Product lead Brad Abrams highlighted that the larger context window will benefit AI coding platforms that rely on Claude, especially for “long‑agentic coding tasks” where the model works autonomously for extended periods. By remembering all prior steps, Claude can perform better on complex software‑engineering problems that require a view of an entire project rather than isolated snippets.
Business Strategy
Anthropic’s revenue model centers on selling AI models to enterprises through an API, distinguishing it from competitors like OpenAI that generate significant income from consumer subscriptions to ChatGPT. The company has built a sizable enterprise business by supplying Claude to AI coding platforms such as Microsoft’s GitHub Copilot, Windsurf, and Anysphere’s Cursor. Despite the competitive pressure from GPT‑5, Abrams expressed confidence in the continued growth of Anthropic’s API business.
Source: techcrunch.com