Google AI deal with Pentagon marks a new era of military tech cooperation
AI & Data

James Whitemore

Google signed a classified AI deal with the Pentagon allowing Gemini to be used for any lawful government purpose. Over 600 employees opposed it. Google signed it anyway.

Google has signed a classified AI deal with the Pentagon that allows the US Department of Defense to use its Gemini AI models for any lawful government purpose, including on classified networks. The agreement, first reported by The Information, is an amendment to an existing unclassified contract Google had already signed with the DoD in late 2024. More than 600 Google employees, including researchers at Google DeepMind, signed an open letter urging CEO Sundar Pichai to reject it before it was finalized. Google signed it anyway.

The deal positions Google alongside OpenAI and Elon Musk's xAI as AI companies that have agreed to make their models available on classified Pentagon infrastructure. The Pentagon signed agreements worth up to $200 million each with major AI labs in 2025, and the fiscal 2026 defense budget included $13.4 billion dedicated to AI and autonomy. The scale of what is being built has moved well past the experimental phase. It is now an industrial buildout treating AI as foundational military capability.

What makes this deal notable is not just what Google agreed to, but what it gave up. According to reports, the agreement requires Google to assist in adjusting its AI safety settings and filters at the government's request. It also specifies that Google cannot control or veto lawful government operational decision-making, a provision aimed directly at preventing a repeat of what happened with Anthropic, which attempted to impose its own guardrails on Pentagon usage and saw those negotiations collapse publicly. Google, by contrast, accepted terms that explicitly limit its own ability to restrict how its technology is used once deployed inside classified systems.

For context, in February 2025 Google removed the passage from its AI principles that had pledged to avoid using AI in weapons systems or technologies that could facilitate injury to people. DeepMind CEO Demis Hassabis co-authored a blog post justifying that removal, citing global competition for AI leadership. That decision cleared the internal policy path for the Pentagon deal that followed. By December 2025, the Pentagon had already launched GenAI.mil, powered by Google's Gemini chatbot. By March 2026, Gemini AI agents had been deployed to the Pentagon's three-million-strong workforce at the unclassified level. The classified expansion announced last week is the next step in a progression that Google's own leadership has been building toward for years.

The employee response this time is measurably different from 2018, when Google pulled out of Project Maven after roughly 4,000 employees signed a petition and a dozen resigned in protest. In 2026, 600 signatures did not move the needle. Current and former employees told Fortune that the leverage tech workers once held over corporate AI policy has eroded significantly. Google's internal memo to staff said the company "proudly" works with the US military and plans to continue. That is a different posture than anything the company would have published in 2018.

For the MENA region, the implications are layered. Gulf states including the UAE and Saudi Arabia are deepening their own AI infrastructure investments through entities like Humain and G42, both of which have existing or prospective relationships with US defense and intelligence ecosystems. The normalization of classified AI deployments by major US tech companies at the Pentagon level signals that the broader direction of AI governance is moving toward fewer restrictions on military use, not more. That trajectory shapes the environment in which MENA governments and their AI partners will operate, and it raises the same unresolved questions about autonomous weapons and surveillance in contexts where oversight frameworks are even less developed than they are in the US.


James Whitemore

@JWhitemoreTech

James Whitemore is TechScoop's International Technology Correspondent, bridging the gap between global tech trends and their impact on the MENA region. With 36 articles exploring everything from AI breakthroughs to climate tech innovations, James brings a unique perspective shaped by his experience covering Silicon Valley and European tech hubs. His feature stories on cross-border investments and international expansion strategies have become essential reading for founders looking to scale globally.

