Anthropic has committed to spending $200 billion with Google Cloud over five years. The deal accounts for more than 40% of the $462 billion backlog Google disclosed to investors last week.
When Google reported a $462 billion cloud backlog last week, the number raised eyebrows across the industry. A single line buried in a report published Tuesday by The Information explains a significant portion of it. Anthropic has committed to spending $200 billion with Google Cloud over five years, a figure that accounts for more than 40% of the entire backlog Google disclosed to investors. That one contractual relationship, between an AI lab and its primary cloud and chip supplier, is now one of the largest infrastructure commitments in the history of the technology industry.
The deal formalizes and dramatically expands a relationship that had already been deepening for months. In April 2026, Anthropic signed an agreement with Google and chip partner Broadcom for multiple gigawatts of next-generation TPU capacity expected to come online starting in 2027. Alphabet is also investing up to $40 billion directly in Anthropic, meaning the two companies are simultaneously investor and investee, customer and supplier, and competing AI developers operating in the same market. That layered relationship is unusual at this scale and speaks to how intertwined the economics of frontier AI have become with the infrastructure decisions of a small number of very large companies.
The $200 billion figure needs context to understand what it actually represents. It is not a check that has been written. It is a contractual commitment: a binding agreement that Anthropic will route a defined volume of compute spending through Google Cloud over the next five years. The compute in question covers cloud infrastructure and TPU chips, the specialized silicon Google builds for AI training and inference workloads. For Anthropic, locking in this volume of capacity is an acknowledgment that the constraint on how fast it can train and deploy models is not talent or ideas but physical compute, and that securing it in advance at scale is now a competitive necessity rather than a routine procurement decision.
Anthropic is not putting all of its infrastructure in one place. The same week the Google commitment was reported, it emerged that Anthropic also holds a multi-year deal with cloud infrastructure firm CoreWeave and is on track to secure nearly one gigawatt of capacity through Amazon's chips by the end of 2026. In late April, it separately announced an expanded collaboration with Amazon for up to five gigawatts of capacity for training and deploying Claude. Running parallel infrastructure relationships with Google, Amazon, and CoreWeave simultaneously is expensive and operationally complex, but it gives Anthropic redundancy and negotiating leverage that a single-vendor dependency would not.
The scale of spending involved in staying competitive at the frontier of AI is becoming difficult to comprehend in conventional business terms. Previous projections estimated that Anthropic's server costs in 2026 alone could reach $20 billion. The five-year, $200 billion Google commitment implies an average of $40 billion per year with a single provider. Against that backdrop, Alphabet's decision to raise its own 2026 capital expenditure guidance to as much as $190 billion, and its statement that 2027 capex will increase significantly from there, starts to make structural sense. The infrastructure required to train, host, and serve frontier AI models at the scale that commercial deployment demands is a multi-hundred-billion-dollar undertaking that only a handful of companies are positioned to finance.
For the MENA region, the concentration of AI infrastructure power in a small number of US-based cloud and chip providers is a direct challenge to the ambitions Gulf states have been articulating. Saudi Arabia's Humain and Abu Dhabi's various AI initiatives are explicitly attempting to build sovereign AI infrastructure rather than remain dependent on foreign providers. The Anthropic-Google deal illustrates exactly why that ambition is so difficult to execute. When a single AI lab commits $200 billion to one cloud provider over five years, the capital and technology required to build independent infrastructure at competitive scale becomes a generational investment rather than a near-term project. The Gulf has sovereign wealth to deploy, but the gap between aspiration and the physical reality of frontier compute remains significant.