
Major tech companies are stepping up their cooperation with government regulators on AI safety testing, with Microsoft, Google DeepMind and xAI all signing new agreements to give early access to their most advanced models. The deals come as both US and UK authorities ramp up efforts to understand and mitigate potential risks from cutting-edge AI systems.
Microsoft announced it is working with the UK's AI Security Institute (AISI) and the US Center for AI Standards and Innovation (CAISI) to assess its advanced models and prepare protections. The software giant said government scrutiny actually helps improve its AI systems, explaining that “well-constructed tests help us understand whether our systems are working as intended” and that such testing keeps the company alert to risks such as AI-driven cyberattacks and criminal misuse.
The timing is notable given recent tensions between US authorities and AI companies, including a well-publicized dispute between the US government and Anthropic in recent weeks. CAISI director Chris Fall called the deals “essential” for understanding potential dangers and said they come at a “critical moment.”
CAISI, which operates as a unit of the US Department of Commerce, serves as the government's primary point of contact for AI testing, research and the development of best practices. The organization previously signed safety and evaluation deals with Anthropic and OpenAI in 2024, and has completed at least 40 evaluations to date.
The agreements grant government researchers early access to frontier AI models before they’re released publicly. This early access model is becoming the standard approach for AI oversight, allowing regulators to identify potential security vulnerabilities and misuse risks before systems reach the market.
The companies have committed to work on improving the US government’s understanding of AI capabilities, potential national security risks, and the competitive landscape. This cooperation reflects growing recognition that advanced AI systems could pose significant risks if not properly tested and secured.
Recent testing has shown both progress and ongoing challenges in AI safety. The UK’s AISI noted in its 2025 Frontier AI Trends Report that “longer, more sophisticated attacks” are now required to jailbreak the most protected AI models for certain malicious requests. However, the institute cautioned that “the efficacy of safeguards varies between models” and no system tested was completely free of vulnerabilities.
This government-industry cooperation represents a shift toward more proactive AI regulation. Rather than waiting for problems to emerge, regulators are working directly with companies during development to identify and address potential risks. The approach could become a template for AI oversight globally as other countries grapple with similar challenges around advanced AI systems.