AI Left Sam Altman ‘Useless and Sad’—and X Turned It Into a Pile-On

OpenAI CEO Sam Altman tried to share a rare moment of vulnerability about his own technology—and the internet did what it usually does: it pounced.

Altman revealed that while experimenting with Codex, OpenAI’s AI coding assistant, he suddenly felt “a little useless” and found the experience “sad.” The admission came in a late-night post on X, where he reflected on what it felt like to watch an AI system outperform him at a task he once did himself.

Codex: The Tool That Outshined Its Creator

Codex is OpenAI’s AI-powered software engineering assistant. It’s built to help developers with a wide range of coding tasks, including:

– Proposing and writing new features
– Tracking down and fixing bugs
– Answering technical questions about an existing codebase
– Running and interpreting tests
– Suggesting pull requests

All of this happens inside a sandboxed environment that can understand and manipulate real code.

In other words, Codex isn't just autocomplete on steroids; it's designed as a semi-autonomous collaborator that can meaningfully participate in software development.
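For readers who haven't used this class of tool, the basic interaction pattern is simple: you hand the model some code and a goal, and it hands back an analysis or a proposed change. The sketch below is not Codex's own interface; it's a minimal illustration using OpenAI's general-purpose Python SDK, with the model name, prompt, and buggy snippet invented purely for the example.

```python
# Minimal sketch of delegating a coding task to an AI model.
# Assumptions: the `openai` Python package is installed, OPENAI_API_KEY is set,
# and "gpt-4o" is used only as an illustrative model name -- this is not
# Codex's actual product interface.
from openai import OpenAI

client = OpenAI()

buggy_code = """
def average(values):
    return sum(values) / len(values)  # crashes on an empty list
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": f"Find the bug and propose a fix:\n{buggy_code}"},
    ],
)

# The reply arrives as plain text: an explanation plus a patched version of the function.
print(response.choices[0].message.content)
```

Agentic tools like Codex wrap this loop in more machinery, such as repository access, test execution, and sandboxing, but the core exchange of "here is the code, here is what I want" is the same.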

While testing the tool by building an app, Altman said Codex began generating feature ideas that were simply better than the ones he was coming up with. For someone who has spent years in the world of startups and product design, watching a machine beat him at his own game triggered something deeply human: a sense of dislocation and loss of relevance.

He acknowledged the broader promise of AI—hinting that humanity will eventually discover “much better and more interesting ways to spend our time”—but admitted he felt “nostalgic for the present,” for a world where humans still clearly led the creative process.

X Users Didn’t Offer Sympathy

If Altman was hoping for a thoughtful conversation about the emotional impact of AI, he misread the mood.

Responses on X quickly veered away from empathy and toward mockery, criticism, and pent-up frustration. Many users took his confession as an opportunity to:

– Roast the emotional tone of his post
– Accuse him of downplaying the consequences of the technologies he champions
– Vent about job losses, shrinking salaries, and downward pressure on creative and technical fields

Some users essentially argued that if Altman, one of the most powerful figures in AI, felt “useless,” that was nothing compared to how displaced many workers felt watching AI tools encroach on their livelihoods.

Others leaned into dark humor: if the architect of the AI boom is feeling obsolete, they suggested, perhaps that’s poetic justice.

A Lightning Rod for Anger Over AI and Jobs

The backlash wasn’t just about one post. It tapped into a broader resentment that has been building for months as AI systems advance rapidly and begin to automate tasks once considered safely “creative” or “skilled.”

Developers, designers, copywriters, illustrators, and even lawyers have publicly worried that AI tools are:

– Reducing the amount of work available
– Compressing wages by driving demand toward cheaper, AI-assisted labor
– Forcing workers to become “prompt operators” instead of craftspeople
– Making it harder for juniors to learn on the job, as AI handles basic tasks they would have cut their teeth on

In that context, Altman’s admission sounded to many less like a vulnerable reflection and more like a distant acknowledgement from someone insulated by wealth, equity, and influence.

The Paradox of Building Tools That Replace You

Altman’s reaction exposes one of the central paradoxes at the heart of AI: the people driving the technology forward are also, at some level, designing systems that can outperform them personally.

Historically, this isn’t entirely new. Calculators outperformed human arithmetic. Spreadsheets beat ledger books. Search engines outclassed librarians in speed and breadth. But the emotional reaction this time feels sharper, because AI isn’t just doing repetitive work—it’s increasingly encroaching on idea generation, planning, and problem-solving.

For a founder or engineer, having an AI tool suggest better product features, cleaner code, or smarter architectures cuts close to identity. It raises questions like:

– What is my comparative advantage if the machine ideates and executes faster?
– Am I here to build, or just to supervise and sign off?
– If leadership is about judgment, what happens when models start to approximate that too?

Altman’s nostalgia “for the present” hints at unease about how fast this future is arriving—even for those steering the ship.

Why His Vulnerability Landed So Poorly

On paper, a CEO admitting vulnerability about his own tech sounds like what critics often ask for: humility and honesty. Yet in practice, the reaction was overwhelmingly hostile. That disconnect says a lot about the current climate around AI.

There are several reasons it fell flat:

1. Timing and Power Imbalance
Many people already feel they’re bearing the costs of AI—unstable work, pressure to upskill constantly, or the fear of automation—while tech leaders reap most of the rewards. In that light, a billionaire-adjacent CEO complaining about feeling “useless” can come off as tone-deaf.

2. Abstract Optimism vs. Concrete Harm
When Altman talks about humans figuring out “more interesting ways to spend our time,” critics hear a familiar narrative: vague, long-term optimism paired with very real, short-term disruption. Without a clear, concrete path for workers whose skills are being displaced, that optimism can feel hollow.

3. Involuntary vs. Voluntary Obsolescence
Altman is voluntarily experimenting with tools that make him feel obsolete. Millions of workers don’t have that luxury; the tools are being imposed on their industries whether they like it or not. That difference in agency changes how similar emotions are received.

4. Platform Culture
X is not a forgiving space for nuance. It rewards outrage, dunking, and sharp one-liners. A reflective, melancholic post from a high-profile figure was almost guaranteed to become a target.

AI, Identity, and the Fear of Being Replaceable

Underneath the online sarcasm lies a quieter, more universal anxiety: if AI can outperform us at what we’re good at, what does that make us?

For programmers, Codex and similar tools trigger identity-level questions. Coding isn’t just a job; for many, it’s a craft, a puzzle, and a source of pride. When an AI can, in seconds, generate solutions that might take a human hours, the emotional hit isn’t simply economic. It’s existential.

The same dynamic appears in other fields:

– Writers watching language models draft readable copy in seconds
– Artists seeing image models produce complex visuals from simple prompts
– Musicians hearing AI generate instrumentals or melodies in their style

The fear isn’t only, “Will I get paid for this?” It’s also, “If a machine can do it, was this ever really special?”

Altman’s reaction to Codex is a high-profile version of what many others quietly feel and rarely say out loud.

Could AI Actually Make Work More Human?

There is another possible reading of Altman’s comment about “more interesting ways to spend our time.” If AI truly handles much of the routine, repetitive, or even technical heavy lifting, human work might shift toward areas where we still have an edge—or where the human presence is valued regardless of efficiency:

– Deep interpersonal care and support
– Complex, ambiguous decision-making involving ethics and tradeoffs
– Negotiation, persuasion, and leadership in messy human systems
– Taste-driven work, where the creator’s identity is part of the value

In software development, for example, AI could eventually handle much of the syntax, boilerplate, and standard patterns, while humans focus more on high-level architecture, product strategy, user empathy, and cross-functional alignment.

The problem is not that this future is impossible—it might be quite plausible. The problem is the transition. Workers are being asked to bet their livelihoods on a future where their “more interesting” roles appear eventually, without a clear safety net in the meantime.

What Responsibility Do AI Leaders Have?

Altman’s experience with Codex raises a tougher question than whether AI can code: what should the people building these systems do with their growing awareness of their power?

Possible responsibilities include:

Honest Communication
Not just celebrating breakthroughs, but consistently acknowledging where AI is likely to displace work, and how fast.

Policy Engagement
Working with governments and institutions to shape retraining programs, income support experiments, and safety regulations instead of simply lobbying for minimal oversight.

Corporate Practices
Using AI internally in ways that augment employees rather than immediately replacing them whenever an efficiency gain appears on a spreadsheet.

Funding Transition Pathways
Supporting educational initiatives, grants, and tools that help workers move into new roles that AI is less likely to automate.

Altman’s own discomfort could be a starting point for that conversation. Instead, the reaction on X suggests that trust between AI leaders and the public is already fragile.

Learning to Live With Superhuman Tools

Whether we like it or not, AI systems that feel “too good” at what they do are not going away. Codex and similar coding agents will keep improving. Their descendants will likely take on more responsibility, not less.

That means two parallel adaptations have to happen:

1. Technical and Economic Adaptation
Education, training, and job design must evolve so that humans work with AI instead of competing directly against it in areas where machines clearly have the advantage.

2. Psychological Adaptation
Individuals—and especially knowledge workers—will need to renegotiate their sense of identity and value. Being good at something that an AI can also do will become normal. The source of pride may shift from “I did this alone” to “I orchestrated this outcome with powerful tools.”

Altman’s moment of sadness with Codex can be read as one of the earliest public examples of that psychological adaptation beginning at the very top.

The Inevitable Backlash—And What Comes After

The harsh reaction on X is unlikely to be the last time an AI leader gets roasted for sharing doubts or unease. For now, public sentiment is heavily polarized: some are intoxicated by the potential, others are bracing for impact.

But beneath the noise, a subtler shift is happening. As AI systems permeate creative, technical, and even strategic domains, more people—CEOs and entry-level workers alike—will have their own “Codex moment,” when a machine clearly outperforms them at something they care about.

How society responds to those moments—whether with mockery, empathy, policy, or denial—will help determine whether AI becomes a tool that broadens human flourishing or a force that deepens alienation.

Altman’s confession, however clumsily received, underscores one simple reality: even the people building the future are struggling to emotionally process it. And if they feel useless and sad in the face of their own creations, it’s worth asking what everyone else is going to feel—and what we’re going to do about that.