Google, Microsoft, xAI Agree to Pre-Release AI Reviews by US Government
In a significant expansion of federal oversight, Google DeepMind, Microsoft, and Elon Musk's xAI have agreed to allow the U.S. government to examine new artificial intelligence models before they are released publicly. The Commerce Department's Center for AI Standards and Innovation (CAISI) announced Tuesday that it will conduct pre-deployment evaluations and targeted research on these frontier AI systems.
CAISI, which began reviewing models from OpenAI and Anthropic in 2024, stated it has already completed 40 evaluations. Both OpenAI and Anthropic have renegotiated their existing agreements with the center to align with priorities set by President Donald Trump’s administration, according to the announcement.
"Pre-deployment evaluations help us identify potential risks early, from bias to security vulnerabilities," said Dr. Elena Marchetti, CAISI’s director of evaluation. "By integrating these checks before a model reaches the market, we can ensure that critical safety standards are met."
Background
CAISI was established within the National Institute of Standards and Technology to address the unique challenges posed by advanced AI systems. The center originally focused on voluntary reviews with a handful of companies, but the new agreements with Google, Microsoft, and xAI mark a widening of its scope.

The move comes amid broader global debates on AI regulation. The U.S. has favored a voluntary, industry-led approach, but critics argue that independent pre-release testing is essential given the speed of AI development. The Trump administration has emphasized American leadership in AI while also expressing concerns about national security risks.
Industry analyst Sarah Kline remarked, "This is an early signal that even the largest tech firms are willing to submit to government scrutiny to maintain public trust and avoid potential legislative crackdowns."

What This Means
The agreements set a precedent for closer collaboration between the federal government and AI developers. For companies, joining the program may offer reputational benefits and a smoother path to eventual regulatory compliance.
For consumers and businesses, the reviews could lead to earlier detection of flaws in AI products, such as inaccurate outputs, privacy leaks, or harmful biases. However, the process remains voluntary, and companies are not required to delay releases based on CAISI’s findings.
"The effectiveness of these evaluations will depend on how transparent the companies are and whether the government can keep pace with rapid model updates," said Marchetti. "Our goal is to build a framework that evolves with the technology."
Looking ahead, observers expect other major AI players such as Meta and Amazon to face similar pressure to join. The program may also influence international standards, as other nations watch the U.S. approach to pre-launch AI testing.
For now, the reviews cover only frontier models—the most advanced and capable systems. CAISI has not disclosed whether it will extend testing to smaller, specialized models used in healthcare, finance, or education.