US Treasury presses Wall Street CEOs over cyber risks from Anthropic’s Claude Mythos AI
US financial regulators have quietly escalated their scrutiny of cutting‑edge artificial intelligence, summoning the heads of the country’s largest banks to discuss potential cyber threats linked to Anthropic’s newest model, Claude Mythos.
Treasury Secretary Scott Bessent convened senior executives at the department’s headquarters in Washington this week; Federal Reserve chair Jerome Powell was also reported to have attended. The high‑level session centered on the security implications of Claude Mythos, which Anthropic itself has described as presenting “unprecedented” cybersecurity risks.
The meeting came just weeks after portions of Mythos‑related code surfaced in an unauthorized leak, intensifying debate over whether the system could be weaponized. In a recent blog post, Anthropic warned that advanced AI models have already surpassed “all but the most skilled humans at finding and exploiting software vulnerabilities,” cautioning that the fallout for the global economy, public safety, and national security “could be severe.”
Because the gathering coincided with an industry event in Washington, invitations went primarily to the leaders of systemically important financial institutions – the large banks whose operational resilience is considered critical to overall financial stability. Disruptions at these firms, regulators argue, would not remain contained and could ricochet through payment systems, funding markets, and the broader economy.
Reported attendees included Goldman Sachs CEO David Solomon, Bank of America’s Brian Moynihan, Citigroup chief Jane Fraser, Morgan Stanley CEO Ted Pick, and Wells Fargo’s Charlie Scharf. JPMorgan Chase’s Jamie Dimon received an invitation but did not attend in person.
Dimon, however, underlined the scale of the problem in his annual shareholder letter released this week, calling cybersecurity “one of our biggest risks” and warning that artificial intelligence “will almost surely make this risk worse.” His comments echo growing unease in boardrooms over the dual‑use nature of powerful AI: a tool that can strengthen defenses but can also dramatically lower the barrier to sophisticated attacks.
Anthropic has said the still‑unreleased Claude Mythos model has already uncovered thousands of security flaws across operating systems, enterprise software, and widely used consumer applications. Given the sensitivity of these findings, access to Mythos has been tightly constrained to a small roster of large technology and infrastructure partners, including Amazon, Apple, and Microsoft.
This is the first time Anthropic has sharply limited a major model rollout. In addition to the big cloud providers, select infrastructure and semiconductor groups such as Cisco and Broadcom have been granted access, along with the Linux Foundation, reflecting the model’s potential impact on core internet and networking stacks.
Officials at the Treasury reportedly focused on how a model capable of rapidly surfacing obscure vulnerabilities might be misused if it escaped controlled environments. One concern is that malicious actors could instruct systems like Mythos to identify weak points in banking software, payment rails, or encryption systems that protect transaction data and customer accounts.
Anthropic has disclosed that some of the vulnerabilities identified by Mythos date back as far as 27 years. Many had gone undetected by both the original developers and professional security‑monitoring tools until the AI system flagged them, underscoring how much latent risk may be embedded in the legacy code that underpins financial infrastructure.
The Treasury discussion also unfolded against the backdrop of a broader federal re‑evaluation of Anthropic’s risk profile. The US government recently designated the company as a potential supply chain risk, a classification that Anthropic is currently contesting in court. The designation suggests officials are worried not only about how the models are used, but also about the company’s role in critical digital supply chains.
Paradoxically, the increased regulatory pressure has coincided with a period of explosive financial growth for Anthropic. Despite ongoing scrutiny from the Treasury and the Department of Defense, the firm says it is experiencing unprecedented commercial demand for its products and infrastructure.
In an update published on April 6, Anthropic said its annualized revenue run rate had surpassed 30 billion dollars as of early April 2026, more than tripling from around 9 billion dollars at the close of 2025. This dramatic jump highlights how rapidly enterprises are integrating AI systems into core workflows, even as policymakers race to understand and mitigate the attendant risks.
A major driver of that revenue expansion has been new compute partnerships with Google and Broadcom. Under these agreements, Anthropic has secured multiple gigawatts of next‑generation TPU capacity, designed to power its frontier‑grade Claude models through 2027 and beyond. For regulators, those deals underscore the systemic scale of AI infrastructure now being deployed.
Anthropic’s agentic coding platform, Claude Code, has emerged as another central revenue pillar. The company says the product was generating more than 2.5 billion dollars in run‑rate revenue as of February, propelled by software teams using AI agents to write, refactor, and secure code at scale. Weekly active users on Claude Code have doubled since the start of the year, pointing to a rapid institutional shift toward AI‑augmented development environments.
For banks, that same capability is both an opportunity and a liability. On the one hand, financial institutions can deploy tools like Claude Mythos and Claude Code to harden their own systems – scanning millions of lines of code for subtle defects, automating patch management, and stress‑testing network defenses. On the other, attackers equipped with similar tools might be able to probe banking systems with unprecedented speed and precision.
Regulators are therefore increasingly framing advanced AI as a “force multiplier” for cyber activity. Tasks that once required teams of elite security researchers can now, in some cases, be performed or at least substantially accelerated by a single operator orchestrating an AI agent. For institutions that already face a barrage of phishing, ransomware, and credential‑stuffing attacks, the prospect of AI‑supercharged adversaries is alarming.
The leak of Mythos‑related code earlier this month added urgency to these concerns. While details remain scarce, officials fear that even partial access to system prompts, architectures, or evaluation tools could help sophisticated threat actors reverse‑engineer capabilities or tailor their own models to emulate Mythos’s strengths.
Government agencies are now weighing whether safeguards around high‑capability AI systems should mirror those used for dual‑use technologies in other sectors, such as export controls, strict access tiering, or mandatory incident reporting. The Treasury’s move to bring in bank CEOs indicates that any such framework will likely be developed hand‑in‑hand with the private sector rather than imposed unilaterally.
For their part, large banks are under pressure to demonstrate they can manage these emerging risks without stifling innovation. Many financial institutions are aggressively adopting AI to improve fraud detection, credit underwriting, trading strategies, and customer service. That makes a full retreat from advanced models like Mythos unlikely. Instead, executives are exploring internal “red‑team” deployments, where AI systems are used to attack and defend their own networks in controlled simulations.
Another dimension of concern is encryption. While current mainstream models are not close to breaking modern cryptographic schemes outright, tools like Mythos could automate the discovery of implementation errors, weak configurations, and side‑channel vulnerabilities in systems that rely on encryption. In practice, that might mean attackers gain access not by solving the math, but by exploiting overlooked engineering mistakes at scale.
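To make the concern concrete, consider a deliberately simple illustration – a generic sketch, not code from any bank or from Mythos itself, with hypothetical function names. A server that checks a secret API token with an ordinary equality comparison leaks timing information, because the comparison stops at the first mismatched byte. The flaw is not in the cryptography but in the engineering around it, exactly the class of mistake a vulnerability‑hunting model could surface at scale.

```python
import hmac

# Vulnerable: bytes equality short-circuits at the first mismatch, so the
# server's response time leaks how many leading bytes of a guess are correct.
# An attacker measuring timings can recover the token byte by byte.
def check_token_vulnerable(supplied: bytes, expected: bytes) -> bool:
    return supplied == expected

# Safer: hmac.compare_digest takes time independent of where the inputs
# differ, closing the timing side channel without touching the crypto math.
def check_token(supplied: bytes, expected: bytes) -> bool:
    return hmac.compare_digest(supplied, expected)
```

Neither function breaks or strengthens any cipher; the difference is purely implementation discipline – which is why regulators worry about AI systems that can spot the first pattern across millions of lines of deployed code.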
The supply chain risk designation leveled at Anthropic also reflects worries about concentration. As a handful of companies come to dominate the most advanced AI infrastructure, any compromise, failure, or misuse at one provider could ripple out to thousands of dependent organizations, including banks, exchanges, and payment processors that rely on shared cloud and model platforms.
Anthropic is attempting to position itself as a responsible steward amid these debates, emphasizing controlled access, red‑teaming, and close collaboration with major technology partners. Yet its own success – measured in soaring revenue, expanding infrastructure deals, and accelerating enterprise adoption – ensures that both its models and its governance choices will remain at the center of policy conversations.
For the US Treasury, this week’s meeting was a signal that AI risk is no longer a niche concern for technical teams but a board‑level, system‑wide issue. As models like Claude Mythos push the frontiers of what software can do, the challenge for regulators and banks alike will be to harness their defensive potential without opening new doors for attackers.
In the coming months, observers expect more concrete guidance on how financial institutions should deploy high‑capability AI safely: stricter model governance, enhanced testing requirements, clearer accountability for AI‑driven incidents, and closer sharing of threat intelligence between government and industry. The Claude Mythos episode has effectively become a test case for how the US will manage the collision of frontier AI and critical financial infrastructure.
For now, access to Mythos remains tightly restricted, regulatory scrutiny is intensifying, and Anthropic continues to grow at a pace rarely seen in enterprise technology. The intersection of those three forces – security risk, government oversight, and commercial momentum – is likely to shape the next phase of AI policy for Wall Street and Washington alike.
