Cryptopolitan 2026-05-05 21:15:22

White House now wants to review AI models before they are released to the public

The White House is considering a plan to review some of the most powerful artificial intelligence systems before they are released to the public. The proposal, first reported by The New York Times on May 4, would introduce federal scrutiny at a critical point in the AI lifecycle: just before deployment.

Officials are increasingly concerned that frontier models can now identify and exploit weaknesses in the software that underpins essential infrastructure. An analysis published by The Conversation points to growing evidence that recent AI systems can uncover large numbers of vulnerabilities in operating systems and web browsers. That capability, while valuable for defensive security, raises the stakes if such tools were misused or fell into the wrong hands.

Model capabilities prompt policy rethink

The policy discussion gained urgency after Anthropic opted to delay wider release of its latest model, Mythos. Internal testing revealed advanced cybersecurity capabilities, including the ability to identify numerous exploitable flaws. In response, the company restricted access to a limited group of organizations responsible for critical infrastructure through its "Project Glasswing" initiative. According to The Conversation, the White House stepped in when Anthropic explored expanding access, signaling a more hands-on approach to AI oversight even as broader tech policy has remained relatively market-driven.

Concerns are not limited to a single company or system. The UK AI Safety Institute reported in an April evaluation that OpenAI's GPT-5.5 demonstrated comparable performance on advanced cybersecurity tasks. In one test highlighted by the institute, the model reverse-engineered a custom virtual machine and solved a complex challenge in minutes, far faster than a human expert using professional tools.
"A key question was whether this reflected a breakthrough specific to one model, or part of a broader trend," the institute wrote in its report. "Results from an early checkpoint of GPT-5.5 suggest the latter."

Taken together, the findings suggest these capabilities are becoming a common feature of leading AI systems rather than an isolated breakthrough.

AI security concerns take on a geopolitical dimension

The implications extend beyond technology into national security. Researchers cited by The Conversation warn that state-linked groups could use similar tools to carry out cyber operations, potentially targeting infrastructure or economic systems.

There are already early signs of this shift. Anthropic has reported suspected state-backed actors using its models in espionage campaigns affecting dozens of organizations. Meanwhile, Microsoft and OpenAI said in 2024 that government-affiliated groups were leveraging AI to enhance cyberattacks.

At the same time, researchers are still grappling with how to reliably control these systems. Work cited by The Conversation suggests that safety filters applied after training can be bypassed, while some models may appear compliant without fully shedding risky capabilities. This has led to a growing view among policymakers that safety measures may need to be embedded during model development rather than added later.

Policy outlook: fragmented frameworks, rising pressure

The U.S. proposal is taking shape within a broader, still-evolving global regulatory landscape:

- The EU AI Act sets out a risk-based framework with strict requirements for high-risk applications, though it does not mandate centralized approval for frontier models.
- The UK has leaned on voluntary cooperation through the UK AI Safety Institute, focusing on testing and evaluation partnerships with developers.
- The emerging U.S. approach appears to be moving toward direct oversight of the most advanced systems, potentially through pre-release review.
Lawmakers have begun examining these issues more closely, with congressional hearings in April on AI safety and governance, though no comprehensive legislation has yet advanced.

If implemented, a U.S. pre-release review system would represent a shift toward earlier intervention, placing oversight at the point where risks can be anticipated rather than after they materialize. Such a move could set an informal global standard, particularly given the concentration of leading AI developers in the United States. At the same time, it raises the possibility that companies may shift parts of development or deployment to regions with fewer restrictions.

The challenge for policymakers is balancing innovation with risk management in a field where capabilities are advancing quickly and the consequences of misuse could be far-reaching.

For now, the most concrete safeguards remain voluntary. Companies such as Anthropic are limiting access to sensitive systems, while organizations like the UK AI Safety Institute continue to run independent evaluations. Whether the United States formalizes its approach, and whether other countries align with it, will be a key factor in shaping how AI is governed in the years ahead.
