US Government Will Vet Pre-Release AI Models From Google, xAI and Microsoft

Source: Decrypt

Published: 2026-05-05 16:21

BTC Price: $81,542

#ai #regulation #nationalsecurity

Analysis

Price Impact

Low

The news focuses on AI regulation and national security concerns, which have no direct, immediate impact on cryptocurrency prices. While AI can influence trading strategies, this specific development is not tied to any cryptocurrency asset.

Trustworthiness

High

Price Direction

Neutral

There is no direct link between U.S. government vetting of AI models and cryptocurrency price movements. AI's impact on crypto is generally indirect, through trading algorithms or platform development, not through regulatory news of this nature.

Time Effect

Long

The implications of AI regulation for the broader tech industry, including potential future effects on AI development and integration, could be long-term. However, based on this news alone, the direct effect on crypto is negligible in both the short and long term.

Original Article:

In brief

- Google, Microsoft, and xAI agreed to give the U.S. government early access to frontier models for national security testing.
- President Donald Trump is reportedly considering an executive order regarding AI oversight.
- Anthropic's new Claude Mythos model, which excels at finding cybersecurity vulnerabilities, sparked government concern.

Some of the biggest players in the AI world have agreed to give the U.S. government early access to their frontier models to test them ahead of public release, with the announcement coming one day after a report that President Donald Trump's administration is weighing an executive order on the matter.

On Tuesday, the Commerce Department's Center for AI Standards and Innovation announced that Google, Microsoft, and xAI have already agreed to provide the government with pre-release access to assess the systems' capabilities.

"Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," said Chris Fall, the center's director, in a statement. "These expanded industry collaborations help us scale our work in the public interest at a critical moment."

On Monday, the New York Times reported that the Trump administration is considering an executive order to create a working group that would review advanced AI models before public release. White House officials discussed the oversight plans with executives from Anthropic, Google, and OpenAI in meetings last week, per the report.

The executive order discussions were prompted partly by Anthropic's announcement last month that its breakthrough Claude Mythos model was adept at finding weak points in cybersecurity defenses, raising concerns among officials about national security implications.
Rather than launch Claude Mythos to the public and potentially unleash a frenzied surge to both break and fix software like web browsers and operating systems, Anthropic has instead provided access to a limited number of startups and organizations. Mozilla said it was able to find and patch 271 vulnerabilities in its Firefox web browser using Mythos.

Users on Myriad, a prediction market platform operated by Decrypt's parent company, Dastan, don't believe that Anthropic will release Claude Mythos broadly by June 30, penciling in just a 13% chance as of this writing.

Separately, the administration has clashed with Anthropic over model access. The Trump administration and Anthropic entered a contract dispute in February after Anthropic declined a request for unrestricted access to its AI models. Defense Secretary Pete Hegseth subsequently said he would designate Anthropic as a supply chain risk to national security. A federal appeals court refused to temporarily pause the designation while the lawsuit proceeds. Last week, however, Axios reported that the White House is weighing whether to reverse course and resume its partnership with Anthropic.

The potential executive order marks a sharp reversal from Trump's earlier stance on AI regulation, as he has advocated for minimal oversight of the industry. "We're going to make this industry absolutely the top, because right now it's a beautiful baby that's born," he said last July. "We have to grow that baby and let that baby thrive. We can't stop it. We can't stop it with politics. We can't stop it with foolish rules and even stupid rules."

Since returning to office in 2025, Trump has rolled back Biden-era regulatory requirements that asked AI developers to perform safety evaluations and report on models with potential military applications.
On his first day in office, he revoked a 2023 executive order signed by former President Joe Biden that required developers of AI systems posing risks to share safety test results with the government before public release.

More recently, the Trump administration has proposed a national framework for AI regulation, which would set national standards for the technology without creating a new regulator. The federal push comes amid state-led efforts to regulate AI, some of which have drawn pushback from Trump's agencies, notably a Colorado law on "algorithmic discrimination."