Amazon and Anthropic have expanded their strategic collaboration, combining large-scale infrastructure investment, custom silicon deployment and deeper platform integration to accelerate global adoption of generative AI.
Strategic collaboration scales global AI capacity, custom silicon adoption and enterprise access
The agreement includes a commitment from Anthropic to spend more than $100 billion over the next decade on Amazon Web Services (AWS) technologies, alongside Amazon’s plan to invest $5 billion immediately, with up to an additional $20 billion tied to commercial milestones.
At the core of the partnership is a significant expansion of compute capacity. Anthropic will secure up to 5 gigawatts (GW) of infrastructure capacity using Amazon’s Trainium chips to train and power advanced AI models.
The collaboration spans multiple generations of custom silicon, including Trainium2, Trainium3 and future iterations, alongside tens of millions of Graviton CPU cores — a mix designed to deliver improved price performance and scalability for large-scale AI workloads.
The companies are also continuing joint development efforts through Project Rainier—one of the world’s largest AI compute clusters—which uses nearly half a million Trainium2 chips to train and deploy Anthropic’s Claude models.
Expanding global AI deployment across cloud ecosystems
The partnership also strengthens international reach, with expanded inference capabilities planned across Asia and Europe to support growing enterprise demand.
Anthropic’s Claude Platform is now available directly within AWS, allowing customers to access the full developer experience through existing AWS accounts without additional credentials or billing relationships. This complements availability via Amazon Bedrock, giving enterprises flexibility in how they deploy and scale AI applications.
More than 100,000 customers are already running Claude models on AWS, making it one of the most widely adopted model families on the platform.
Driving enterprise AI adoption across industries
The collaboration is focused on enabling organisations to build, deploy and scale AI applications more efficiently across sectors.
Andy Jassy, CEO of Amazon, said:
“Our custom AI silicon offers high performance at significantly lower cost for customers, which is why it’s in such hot demand. Anthropic’s commitment to run its large language models on AWS Trainium for the next decade reflects the progress we’ve made together on custom silicon, as we continue delivering the technology and infrastructure our customers need to build with generative AI.”

Dario Amodei, CEO and co-founder of Anthropic, added:
“Our users tell us Claude is increasingly essential to how they work, and we need to build the infrastructure to keep pace with rapidly growing demand. Our collaboration with Amazon will allow us to continue advancing AI research while delivering Claude to our customers, including the more than 100,000 building on AWS.”

Strengthening engineering collaboration and innovation pipeline
Beyond infrastructure, the partnership includes close engineering collaboration between Anthropic and Amazon’s Annapurna Labs to optimise future generations of Trainium chips. This includes direct feedback from AI training workloads to inform chip design and performance improvements.
Anthropic continues to use AWS as its primary training and cloud provider for mission-critical workloads, while Amazon developers are also leveraging Claude models to enhance customer-facing services across the company’s ecosystem.