[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-anthropic-amazon-5gw-compute-claude-en":3,"tags-anthropic-amazon-5gw-compute-claude-en":29,"related-lang-anthropic-amazon-5gw-compute-claude-en":30,"related-posts-anthropic-amazon-5gw-compute-claude-en":34,"series-industry-f6d0d13e-085c-458c-8d9f-255b7f1edf92":71},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":18,"translated_content":10,"views":19,"is_premium":20,"created_at":21,"updated_at":21,"cover_image":11,"published_at":22,"rewrite_status":23,"rewrite_error":10,"rewritten_from_id":24,"slug":25,"category":26,"related_article_id":27,"status":28,"google_indexed_at":10,"x_posted_at":10},"f6d0d13e-085c-458c-8d9f-255b7f1edf92","Anthropic and Amazon lock in 5GW for Claude","\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fnews\u002Fanthropic-amazon-compute\" target=\"_blank\" rel=\"noopener\">Anthropic\u003C\u002Fa> just put a hard-to-ignore number on its hunger for compute: up to 5 gigawatts of new capacity with \u003Ca href=\"https:\u002F\u002Faws.amazon.com\u002F\" target=\"_blank\" rel=\"noopener\">Amazon Web Services\u003C\u002Fa>. The company says the agreement covers training and serving \u003Ca href=\"https:\u002F\u002Fclaude.ai\u002F\" target=\"_blank\" rel=\"noopener\">Claude\u003C\u002Fa>, with new \u003Ca href=\"https:\u002F\u002Faws.amazon.com\u002Fmachine-learning\u002Ftrainium\u002F\" target=\"_blank\" rel=\"noopener\">Trainium\u003C\u002Fa> capacity arriving in 2026 and a long-term commitment of more than $100 billion over the next decade.\u003C\u002Fp>\u003Cp>That is a giant bet on scale. It also tells you something simple: the race in frontier AI is no longer just about model quality. 
It is about who can secure enough chips, power, and cloud capacity to keep the models running when millions of users show up at once.\u003C\u002Fp>\u003Ch2>What Anthropic and Amazon actually signed\u003C\u002Fh2>\u003Cp>The headline number is 5GW, but the details matter more. Anthropic says the deal deepens a partnership that started in 2023 and extends its use of AWS as its primary training and cloud provider for mission-critical workloads. The company also says it already uses more than one million \u003Ca href=\"https:\u002F\u002Faws.amazon.com\u002Fmachine-learning\u002Ftrainium\u002F\" target=\"_blank\" rel=\"noopener\">Trainium2\u003C\u002Fa> chips to train and serve Claude.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777032236218-f84t.png\" alt=\"Anthropic and Amazon lock in 5GW for Claude\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>Amazon is putting $5 billion into Anthropic now, with up to another $20 billion possible later. Anthropic says the broader commitment to AWS technologies exceeds $100 billion over ten years. That is not pocket change even by hyperscaler standards, and it signals that both companies expect Claude demand to keep climbing fast.\u003C\u002Fp>\u003Cp>Anthropic also says the agreement will add incremental capacity inside \u003Ca href=\"https:\u002F\u002Faws.amazon.com\u002Fbedrock\u002Fanthropic\u002F\" target=\"_blank\" rel=\"noopener\">Amazon Bedrock\u003C\u002Fa>, where more than 100,000 customers already run Claude. 
The company says new Trainium2 capacity should come online in the first half of 2026, with nearly 1GW of Trainium2 and Trainium3 capacity expected by the end of the year.\u003C\u002Fp>\u003Cul>\u003Cli>Up to 5GW of new compute for Claude\u003C\u002Fli>\u003Cli>More than $100 billion committed to AWS technologies over 10 years\u003C\u002Fli>\u003Cli>$5 billion from Amazon now, with up to $20 billion more later\u003C\u002Fli>\u003Cli>Over one million Trainium2 chips already in use\u003C\u002Fli>\u003Cli>More than 100,000 customers using Claude on Bedrock\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>Why 5 gigawatts matters more than it sounds\u003C\u002Fh2>\u003Cp>Five gigawatts is a power figure, but in AI it translates into something more practical: how many model training runs you can keep going, how much inference you can absorb, and how much downtime you can avoid when demand spikes. Anthropic says its consumer usage has jumped sharply across free, Pro, Max, and Team plans, and that reliability has taken a hit during peak hours.\u003C\u002Fp>\u003Cp>The company’s run-rate revenue has crossed $30 billion, up from about $9 billion at the end of 2025. That is a huge jump in a short time, and it explains \u003Ca href=\"\u002Fnews\u002Fanthropic-800b-valuation-funding\">why Anthropic\u003C\u002Fa> is treating infrastructure like a first-class product problem rather than a back-office expense.\u003C\u002Fp>\u003Cp>There is also a hardware strategy hidden inside the deal. Anthropic says the agreement spans \u003Ca href=\"https:\u002F\u002Faws.amazon.com\u002Fec2\u002Fgraviton\u002F\" target=\"_blank\" rel=\"noopener\">Graviton\u003C\u002Fa> and Trainium2 through Trainium4, with the option to buy future generations of Amazon custom silicon. 
In plain English: Anthropic is betting that AWS silicon can give it enough performance at lower cost to keep scaling without relying only on the most expensive general-purpose chips.\u003C\u002Fp>\u003Cul>\u003Cli>Anthropic says consumer reliability suffered during peak hours\u003C\u002Fli>\u003Cli>Run-rate revenue moved from about $9 billion to over $30 billion in months\u003C\u002Fli>\u003Cli>Capacity is expected within three months, not years\u003C\u002Fli>\u003Cli>Inference expansion is planned for Asia and Europe\u003C\u002Fli>\u003Cli>Workloads are spread across multiple chip families, not a single vendor\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>What Andy Jassy and Dario Amodei are signaling\u003C\u002Fh2>\u003Cp>Amazon CEO \u003Ca href=\"https:\u002F\u002Fwww.aboutamazon.com\u002Fnews\u002Fcompany-news\u002Fandy-jassy\" target=\"_blank\" rel=\"noopener\">Andy Jassy\u003C\u002Fa> framed the deal around custom silicon economics. “Our custom AI silicon offers high performance at significantly lower cost for customers, which is why it’s in such hot demand,” he said. “Anthropic’s commitment to run its large language models on AWS Trainium for the next decade reflects the progress we’ve made together on custom silicon, as we continue delivering the technology and infrastructure our customers need to build with generative AI.”\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777032237024-yakh.png\" alt=\"Anthropic and Amazon lock in 5GW for Claude\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>That quote is doing a lot of work. Jassy is not just talking about capacity; he is making the case that AWS silicon is now good enough to anchor a top-tier frontier model company. 
For Amazon, that is a strong sales pitch for Trainium, Inferentia, and the rest of its AI stack.\u003C\u002Fp>\u003Cblockquote>“Our users tell us Claude is increasingly essential to how they work, and we need to build the infrastructure to keep pace with rapidly growing demand,” said \u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fteam\" target=\"_blank\" rel=\"noopener\">Dario Amodei\u003C\u002Fa>, CEO and co-founder of Anthropic.\u003C\u002Fblockquote>\u003Cp>Amodei’s comment is the clearest read on the deal. Anthropic is not treating this as a nice-to-have cloud expansion. It is treating compute as the thing that decides whether Claude can stay reliable while usage keeps rising across consumer and enterprise products.\u003C\u002Fp>\u003Cp>He also said the collaboration will let Anthropic keep advancing AI research while serving the more than 100,000 customers already building on AWS. That matters because the company is trying to do two expensive things at once: push model capability forward and keep the product dependable under heavy load.\u003C\u002Fp>\u003Ch2>How this compares with the rest of the AI stack\u003C\u002Fh2>\u003Cp>The scale of this agreement puts Anthropic in the same conversation as the biggest infrastructure buyers in AI. It also shows how cloud partnerships are becoming more strategic than simple vendor relationships. Anthropic says Claude is the only frontier model available on all three major cloud platforms: AWS, \u003Ca href=\"https:\u002F\u002Fcloud.google.com\u002Fvertex-ai\" target=\"_blank\" rel=\"noopener\">Google Cloud Vertex AI\u003C\u002Fa>, and \u003Ca href=\"https:\u002F\u002Fazure.microsoft.com\u002Fen-us\u002Fproducts\u002Fai-services\u002F\" target=\"_blank\" rel=\"noopener\">Microsoft Azure\u003C\u002Fa>.\u003C\u002Fp>\u003Cp>That multi-cloud reach is useful for customers, but it is also a hedge for Anthropic. If one cloud gets tight on capacity, pricing, or policy, the company has other paths. 
Still, AWS remains the primary provider for the heaviest workloads, and this deal makes that relationship even tighter.\u003C\u002Fp>\u003Cp>Here is the comparison that jumps out:\u003C\u002Fp>\u003Cul>\u003Cli>\u003Ca href=\"https:\u002F\u002Fopenai.com\u002F\" target=\"_blank\" rel=\"noopener\">OpenAI\u003C\u002Fa> has leaned heavily on Microsoft infrastructure, while Anthropic is spreading across AWS, Google Cloud, and Azure\u003C\u002Fli>\u003Cli>Anthropic says more than 100,000 customers already use Claude on Bedrock, which gives AWS a direct enterprise channel\u003C\u002Fli>\u003Cli>The 5GW commitment is large enough to matter over years, not just quarters\u003C\u002Fli>\u003Cli>Amazon’s $5 billion immediate investment is paired with a possible $20 billion follow-on, which gives the deal financial depth\u003C\u002Fli>\u003Cli>Trainium adoption suggests Anthropic is willing to trade some hardware flexibility for lower-cost scale\u003C\u002Fli>\u003C\u002Ful>\u003Cp>There is a business angle here too. If AWS can keep a flagship model like Claude running on its own chips, it strengthens the case that custom silicon can compete with the default choice of Nvidia-heavy deployments. That could influence how other model makers plan their next infrastructure purchases.\u003C\u002Fp>\u003Cp>For developers, the more immediate effect is simpler: more Claude capacity, fewer peak-hour slowdowns, and likely better access to the Claude Platform inside AWS. Anthropic says that full platform experience will be available directly within AWS with the same account, controls, and billing, which is exactly the sort of thing enterprise teams ask for when they do not want another toolchain to manage.\u003C\u002Fp>\u003Ch2>What this means for Claude users and the AI market\u003C\u002Fh2>\u003Cp>This deal is a sign that Anthropic expects demand to keep rising faster than its current infrastructure can absorb. 
The company says it will get meaningful compute in the next three months and nearly 1GW by the end of the year, which is fast by cloud-infrastructure standards and even faster by AI-model standards.\u003C\u002Fp>\u003Cp>My read is that this is less about bragging rights and more about survival at scale. A model can be technically strong and still feel bad to use if latency climbs, rate limits get tighter, or reliability drops during peak hours. Anthropic is buying its way out of that problem before it gets worse.\u003C\u002Fp>\u003Cp>The bigger question is whether the rest of the market follows this pattern. If frontier model companies need multi-gigawatt deals and decade-long cloud commitments to stay competitive, the cost of staying in the race keeps rising. That will favor the players with deep capital, preferred cloud access, and enough enterprise demand to justify the spend.\u003C\u002Fp>\u003Cp>For now, the takeaway is straightforward: Claude is becoming a much bigger infrastructure story than a chatbot story. 
If Anthropic can turn this 5GW agreement into steadier performance and faster feature rollout, the next thing to watch is whether other model labs start signing similarly massive power-and-compute deals.\u003C\u002Fp>","Anthropic and Amazon signed a bigger AWS deal: up to 5GW for Claude, $100B+ over 10 years, and 100,000+ AWS customers already using it.","www.anthropic.com","https:\u002F\u002Fwww.anthropic.com\u002Fnews\u002Fanthropic-amazon-compute",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777032236218-f84t.png",[13,14,15,16,17],"Anthropic","Amazon Web Services","Claude","Trainium","AI infrastructure","en",0,false,"2026-04-24T12:03:39.90241+00:00","2026-04-24T12:03:39.849+00:00","done","f2f3e1eb-b21f-40b1-b8df-783959d5bd75","anthropic-amazon-5gw-compute-claude-en","industry","259a408a-3045-47ef-ae32-1a2d7b76233e","published",[],{"id":27,"slug":31,"title":32,"language":33},"anthropic-amazon-5gw-compute-claude-zh","Anthropic 與 Amazon 鎖定 5GW 算力","zh",[35,41,47,53,59,65],{"id":36,"slug":37,"title":38,"cover_image":39,"image_url":39,"created_at":40,"category":26},"02c82202-7a43-4483-9f8d-1ace9ced36a3","why-gpt-image-2-matters-more-than-another-ai-image-launch-en","Why GPT Image 2 Matters More Than Another AI Image Launch","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777032939081-mn42.png","2026-04-24T12:15:23.99005+00:00",{"id":42,"slug":43,"title":44,"cover_image":45,"image_url":45,"created_at":46,"category":26},"531324ce-94c0-4372-b74d-ba3a6783e266","why-enterprises-should-stop-treating-codex-like-a-pilot-proj-en","Why enterprises should stop treating Codex like a pilot 
project","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776989564743-q658.png","2026-04-24T00:12:24.676726+00:00",{"id":48,"slug":49,"title":50,"cover_image":51,"image_url":51,"created_at":52,"category":26},"d03222eb-5925-4664-981c-ada6e9100534","why-the-mythos-rollout-is-a-mistake-en","Why the Mythos rollout is a mistake","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776989193996-gtc2.png","2026-04-24T00:06:18.997407+00:00",{"id":54,"slug":55,"title":56,"cover_image":57,"image_url":57,"created_at":58,"category":26},"7a3c1749-eff7-41fc-ab26-41b9c664aac9","openai-cyber-tool-five-eyes-briefings-en","OpenAI’s new cyber tool reaches Five Eyes","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776946014856-fytc.png","2026-04-23T12:06:38.863311+00:00",{"id":60,"slug":61,"title":62,"cover_image":63,"image_url":63,"created_at":64,"category":26},"e6eb0120-8a86-4d36-a3ce-9a21984b736f","why-mythos-ai-is-a-real-cybersecurity-threat-en","Why Mythos AI Is a Real Cybersecurity Threat","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776945825125-1pvp.png","2026-04-23T12:03:23.626641+00:00",{"id":66,"slug":67,"title":68,"cover_image":69,"image_url":69,"created_at":70,"category":26},"7178dcc5-8367-4af2-93d3-94a8267b9613","florida-criminal-probe-openai-chatgpt-en","Florida Opens Criminal Probe Into OpenAI","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776902814102-1318.png","2026-04-23T00:06:38.049851+00:00",[72,77,82,87,92,97,102,107,112,117],{"id":73,"slug":74,"title":75,"created_at":76},"d35a1bd9-e709-412e-a2df-392df1dc572a","ai-impact-2026-developments-market-en","AI's Impact in 2026: Key Developments and Market 
Shifts","2026-03-25T16:20:33.205823+00:00",{"id":78,"slug":79,"title":80,"created_at":81},"5ed27921-5fd6-492e-8c59-78393bf37710","trumps-ai-legislative-framework-en","Trump's AI Legislative Framework: What's Inside?","2026-03-25T16:22:20.005325+00:00",{"id":83,"slug":84,"title":85,"created_at":86},"e454a642-f03c-4794-b185-5f651aebbaca","nvidia-gtc-2026-key-highlights-innovations-en","NVIDIA GTC 2026: Key Highlights and Innovations","2026-03-25T16:22:47.882615+00:00",{"id":88,"slug":89,"title":90,"created_at":91},"0ebb5b16-774a-4922-945d-5f2ce1df5a6d","claude-usage-diversifies-learning-curves-en","Claude Usage Diversifies, Learning Curves Emerge","2026-03-25T16:25:50.770376+00:00",{"id":93,"slug":94,"title":95,"created_at":96},"69934e86-2fc5-4280-8223-7b917a48ace8","openclaw-ai-commoditization-concerns-en","OpenClaw's Rise Raises Concerns of AI Model Commoditization","2026-03-25T16:26:30.582047+00:00",{"id":98,"slug":99,"title":100,"created_at":101},"b4b2575b-2ac8-46b2-b90e-ab1d7c060797","google-gemini-ai-rollout-2026-en","Google's Gemini AI Rollout Extended to 2026","2026-03-25T16:28:14.808842+00:00",{"id":103,"slug":104,"title":105,"created_at":106},"6e18bc65-42ae-4ad0-b564-67d7f66b979e","meta-llama4-fabricated-results-scandal-en","Meta's Llama 4 Scandal: Fabricated AI Test Results Unveiled","2026-03-25T16:29:15.482836+00:00",{"id":108,"slug":109,"title":110,"created_at":111},"bf888e9d-08be-4f47-996c-7b24b5ab3500","accenture-mistral-ai-deployment-en","Accenture and Mistral AI Team Up for AI Deployment","2026-03-25T16:31:01.894655+00:00",{"id":113,"slug":114,"title":115,"created_at":116},"5382b536-fad2-49c6-ac85-9eb2bae49f35","mistral-ai-high-stakes-2026-en","Mistral AI: Facing High Stakes in 2026","2026-03-25T16:31:39.941974+00:00",{"id":118,"slug":119,"title":120,"created_at":121},"9da3d2d6-b669-4971-ba1d-17fdb3548ed5","cursors-meteoric-rise-pressures-en","Cursor's Meteoric Rise Faces Industry Pressures","2026-03-25T16:32:21.899217+00:00"]