Elon Musk has escalated the increasingly personal rivalry among top AI founders, branding Anthropic’s AI as “misanthropic and evil” just hours after the company unveiled a massive new funding round.
On Thursday, Anthropic announced it had secured $30 billion in fresh capital in a Series G round, giving the Claude developer a staggering post-money valuation of $380 billion. That figure instantly places Anthropic among the most highly valued private companies globally and makes it the single most valuable AI lab outside of OpenAI.
The round was co-led by Singapore’s sovereign wealth fund GIC and investment firm Coatue, underscoring how aggressively large institutional investors are now backing frontier AI labs. In its announcement, Anthropic said the money would go toward deepening its research pipeline, accelerating product innovation, and expanding the infrastructure needed to make its Claude models broadly available to enterprise and consumer customers worldwide.
Musk’s attack landed almost immediately after Anthropic’s celebratory post about the raise. Quoting the announcement, he dismissed the company’s work in dramatic terms, calling its AI “misanthropic and evil” and framing the lab itself as a threat rather than a force for beneficial progress. The choice of words marks one of his sharpest public criticisms of a rival AI company to date.
The clash comes at a moment when competition among AI labs has moved far beyond technical benchmarks and product features into a public battle of narratives, values, and personal reputations. Musk, who co-founded OpenAI before cutting ties with the organization, now runs his own AI venture, xAI, and frequently portrays himself as a defender of human-centric, truth-seeking AI against what he characterizes as ideologically captured or manipulative systems.
Anthropic, by contrast, has built its brand around ideas of “constitutional AI,” safety, and alignment—designing models like Claude to follow a written “constitution” of principles aimed at making them more helpful, honest, and harmless. Critics, including Musk, argue that such attempts to hard-code values into AI systems can slide into political or cultural bias, potentially marginalizing certain viewpoints or shaping user behavior in subtle ways. Musk’s “misanthropic” label appears to be an extension of that critique, implying that Anthropic’s models—or the norms they embody—are ultimately anti-human or hostile to ordinary users’ interests.
The scale of Anthropic’s new raise is remarkable in its own right. A $30 billion round at a $380 billion valuation signals that investors now see leading AI labs less like traditional startups and more like strategic infrastructure akin to major chip manufacturers or cloud computing giants. Such valuations rest on the assumption that foundation models will underpin everything from productivity tools and search to robotics, healthcare, and national security.
It also reflects a broader consolidation trend: a small number of labs—OpenAI, Anthropic, Google DeepMind, xAI, and a handful of others—are racing ahead with enormous compute budgets, proprietary data pipelines, and tight partnerships with cloud platforms and chip makers. For these players, raising tens of billions in a single round is not excess; it is framed as a prerequisite to remain competitive at the frontier of model training.
Anthropic’s own statement about the raise emphasized exactly this scale imperative. The company said the new capital will allow it to “deepen our research” and “ensure we have the resources to power our infrastructure expansion” as it pushes to make Claude “available everywhere our customers are.” In practice, that means funding huge training runs, maintaining fleets of GPUs and specialized accelerators, and rolling out Claude across global markets and industries.
Musk’s critique taps into growing public anxiety about who controls such powerful systems and what values they encode. His accusation that Anthropic’s AI is “evil” is not backed by specific technical claims in his comment, but it does resonate with a broader debate: Are today’s leading models being optimized to serve users, or to serve the economic and ideological agendas of those who build and fund them?
Supporters of Anthropic argue that the company is at least attempting to be transparent about its alignment approach, publicly discussing its safety methods and guardrails. They see constitutional AI as an explicit, inspectable framework, preferable to opaque, ad-hoc moderation layers. Detractors counter that any “constitution” is ultimately written by a small group of people with particular cultural, political, and philosophical assumptions—raising concerns about centralization of power over information and discourse.
The rivalry is also personal and strategic. Musk has repeatedly criticized OpenAI, accusing it of drifting from its original nonprofit mission and becoming too closely aligned with big tech and corporate interests. By attacking Anthropic, he is now targeting another key pillar of the emerging AI ecosystem—one that competes directly with his own xAI models in areas like coding assistants, chatbots, and enterprise tools. The war of words doubles as a fight for mindshare, talent, and capital.
At the same time, the episode illustrates how AI governance and branding have become inseparable. Funding announcements that once would have focused purely on valuation and technical milestones now immediately trigger arguments about ethics, safety, censorship, and the future of humanity. Each major lab is not just selling model quality; it is selling a story about what kind of world its technology will help create.
Investors pouring tens of billions into Anthropic are betting that enterprises and governments will prefer its brand of safety-conscious, aligned AI for sensitive domains such as legal work, medical support, and financial decision-making. Musk is betting that there is a large and growing audience for models that position themselves as less filtered, more “truth-first,” and less influenced by what he calls “woke” or ideological constraints.
For users and regulators, the clash underscores a hard reality: the direction of frontier AI is increasingly set by a small group of ultra-capitalized firms and powerful individuals whose disagreements play out not only in research papers and product updates, but also in public insults and social media skirmishes. As valuations soar into the hundreds of billions, the stakes—economic, political, and cultural—rise alongside them.
What happens next will likely depend less on any single remark from Musk and more on how these labs perform in the market and in policy arenas. If Anthropic can demonstrate that its safety-centric approach scales globally and wins trust from major institutions, the latest funding round could entrench it as a long-term pillar of the AI landscape. If, on the other hand, concerns about over-curation or bias gain traction, criticism like Musk’s could fuel demand for alternative models and governance philosophies.
Either way, the combination of a $30 billion funding round and a high-profile denunciation from one of the world’s most visible tech CEOs signals a new phase for AI: one in which capital, ideology, and personality are colliding in plain view, and where every major announcement becomes another front in a rapidly intensifying battle over what artificial intelligence should be—and who it should serve.
