Cryptopolitan 2026-05-05 21:15:22

White House now wants to review AI models before they are released to the public

The White House is considering a plan to review some of the most powerful artificial intelligence systems before they are released to the public. The proposal, first reported by The New York Times on May 4, would introduce federal scrutiny at a critical point in the AI lifecycle, just before deployment.

Officials are increasingly concerned that frontier models are now capable of identifying and exploiting weaknesses in the software that underpins essential infrastructure. An analysis published by The Conversation points to growing evidence that recent AI systems can uncover large numbers of vulnerabilities in operating systems and web browsers. That capability, while valuable for defensive security, also raises the stakes if such tools were to be misused or fall into the wrong hands.

Model capabilities prompt policy rethink

The policy discussion gained urgency after Anthropic opted to delay wider release of its latest model, Mythos. Internal testing revealed advanced cybersecurity capabilities, including the ability to identify numerous exploitable flaws. In response, the company restricted access to a limited group of organizations responsible for critical infrastructure through its “Project Glasswing” initiative. According to The Conversation, the White House stepped in when Anthropic explored expanding access, signaling a more hands-on approach to AI oversight even as broader tech policy has remained relatively market-driven.

Concerns are not limited to a single company or system. The UK AI Safety Institute reported in an April evaluation that OpenAI’s GPT-5.5 demonstrated comparable performance on advanced cybersecurity tasks. In one test highlighted by the institute, the model reverse-engineered a custom virtual machine and solved a complex challenge in minutes, far quicker than a human expert using professional tools.
“A key question was whether this reflected a breakthrough specific to one model, or part of a broader trend,” the institute wrote in its report. “Results from an early checkpoint of GPT-5.5 suggest the latter.” Taken together, the findings suggest these capabilities are becoming a common feature of leading AI systems rather than an isolated breakthrough.

AI security concerns take on a geopolitical dimension

The implications extend beyond technology into national security. Researchers cited by The Conversation warn that state-linked groups could use similar tools to carry out cyber operations, potentially targeting infrastructure or economic systems.

There are already early signs of this shift. Anthropic has reported suspected state-backed actors using its models in espionage campaigns affecting dozens of organizations. Meanwhile, Microsoft and OpenAI said in 2024 that government-affiliated groups were leveraging AI to enhance cyberattacks.

At the same time, researchers are still grappling with how to reliably control these systems. Work cited by The Conversation suggests that safety filters applied after training can be bypassed, while some models may appear compliant without fully eliminating risky capabilities. This has led to a growing view among policymakers that safety measures may need to be embedded during model development rather than added later.

Policy outlook: fragmented frameworks, rising pressure

The U.S. proposal is taking shape within a broader, still-evolving global regulatory landscape:

- The EU AI Act sets out a risk-based framework with strict requirements for high-risk applications, though it does not mandate centralized approval for frontier models.
- The UK has leaned on voluntary cooperation through the UK AI Safety Institute, focusing on testing and evaluation partnerships with developers.
- The emerging U.S. approach appears to be moving toward direct oversight of the most advanced systems, potentially through pre-release review.
Lawmakers have begun examining these issues more closely, with congressional hearings in April on AI safety and governance, though no comprehensive legislation has yet advanced.

If implemented, a U.S. pre-release review system would represent a shift toward earlier intervention, placing oversight at the point where risks can be anticipated rather than after they materialize. Such a move could set an informal global standard, particularly given the concentration of leading AI developers in the United States. At the same time, it raises the possibility that companies may shift parts of development or deployment to regions with fewer restrictions.

The challenge for policymakers is balancing innovation with risk management in a field where capabilities are advancing quickly and the consequences of misuse could be far-reaching.

For now, the most concrete safeguards remain voluntary. Companies such as Anthropic are limiting access to sensitive systems, while organizations like the UK AI Safety Institute continue to run independent evaluations. Whether the United States formalizes its approach, and whether other countries align with it, will be a key factor in shaping how AI is governed in the years ahead.
