Anthropic, a frontier AI company, revealed recently that it had trained a large language model called Mythos. Unlike almost all similar announcements from AI firms, this wasn’t a product release. Anthropic—a developer known for its focus on safety—claims the model is too dangerous to be made generally available because of a significant leap in its hacking abilities. Mythos, the firm says, could knock out software systems that power plants, banks and even armed forces rely on every day. To simply release it, then, would be irresponsible, at least in Anthropic’s telling.
Some have cast doubt on whether the model is really so dangerous. But the firms that joined the company’s Project Glasswing—an effort to patch security vulnerabilities identified by Mythos in the world’s most important software—including JPMorgan Chase, Apple and Microsoft, all seem to think the potential is real. So, too, does America’s government, with Federal Reserve Chair Jay Powell and Treasury Secretary Scott Bessent convening an urgent gathering of bank leaders following the Mythos announcement to discuss its potential impact on financial stability.
What is more, to close observers of frontier AI, Mythos is not an especially surprising development. In fact, it is right on trend for a model of its size, trained with its level of computing power. The potential for models to develop expert-level, or even superhuman, hacking abilities has been widely discussed and forecast for years. This is why the issue figured prominently in the Trump administration’s AI Action Plan, which I helped draft while working in the White House last year.
Nonetheless, Mythos raises a vital question: should models this capable ever see the light of day, or should they remain under government control? Are these weapons, or the next big thing in consumer and enterprise software?
The problem is that the answer is both: the same capabilities that make such a model a weapon also make it valuable software. As such, “victory” in the global AI race means frontier AI will need to diffuse rapidly, transforming the productivity of firms throughout the economy.
Nationalisation of the frontier AI companies by the Pentagon or another government agency would restrict the very dynamism America and its allies need to stay ahead—not to mention the access to global pools of talent and capital, both of which could become harder for a government-run project. Would a “Manhattan Project” for AI really be able to employ foreign nationals, who currently populate as much as half of frontier-lab research staff? And would the American public, who already take a dim view of the construction of privately funded data centres, really be comfortable with hundreds of billions, if not trillions, of their tax dollars going directly to build data centres? It seems unlikely.
A Food and Drug Administration-style licensing regime may be a lighter-touch alternative, but the problem is scope. The range of risks posed by frontier AI, or indeed any general-purpose technology, is so broad that one struggles to imagine how a single licensing body could get its arms around them all. And what about political pressure from the AI industry or from anti-AI interest groups? A licensing regime would concentrate too much power over too fundamental and diffuse a technology to be sensible.
But if Mythos makes one thing clear, it is that the laissez-faire approach is unlikely to work either. Critics of AI regulation assert, correctly, that America succeeded with computers, software and the internet because of its deregulatory posture towards those technologies. But AI really is different. We must not forget the lessons of recent technology waves—a light-touch approach should be our inclination—but neither must we blindly follow the regulatory playbook of the past even when circumstances change.
What the West will need, then, is that rarest of policies: a sensible middle ground. We could begin by requiring transparency about the most serious categories of AI risk: cyber-attacks, bioweapons development, long-range autonomous systems and the like. California and New York have already led in America with light-touch approaches. The federal government could make this a nationwide standard while funding the US Centre for AI Standards and Innovation and other government agencies to conduct specialised testing of frontier AI systems for national-security risks.
Going a step further, we should also create a network of private organisations that can verify the safety claims and security procedures within frontier AI companies. Early examples of these organisations already exist in the form of groups like Model Evaluation and Threat Research, Apollo Research and the AI Verification and Evaluation Research Institute. These are independent, nonpartisan sources of expertise that could conduct audit-like inspections of the biggest AI developers to ensure that their claims about model safeguards match reality. They could also answer specific questions of federal officials about national security, geopolitics and so on.
Open your eyes but don’t look down
Mythos heralds a new era of AI policymaking. The stakes are higher now, and the training wheels have come off. This does not mean we should panic or stumble into creating a massive bureaucracy to oversee the most important technology of the century so far. It means we should be hard-nosed and open-eyed about the challenges, rather than optimistic for the sake of optimism. Properly managed, there is much to be excited about. But that management requires a tightrope walk between dynamism and prudence, and the time has come to take the first steps.
Dean Ball is a senior fellow at the Foundation for American Innovation and author of the Hyperdimensional newsletter. He was previously senior policy adviser for AI at the White House and co-drafted America’s AI Action Plan, released in July 2025.