AI Utopianism Masks Tech Billionaires’ Fear
For media theorist Douglas Rushkoff, the shiny rhetoric around artificial intelligence as a path to a flawless future is less a noble vision than a protective veil. Behind the grand language of “transforming humanity” and “saving the world,” he argues, lies something far more basic: fear, and a scramble by the ultra-rich to secure their own survival if things go wrong.
Rushkoff, a professor of media theory and digital economics at Queens College, CUNY, and author of *Survival of the Richest* and *Team Human*, laid out this view in a recent conversation on the Repatterning Podcast with host Arden Leigh. His critique targets the tech billionaire class that loudly promotes AI as a universal good, while quietly preparing for the possibility that the very systems they build could destabilize society.
“The billionaires are afraid of being hoisted on their own petard,” Rushkoff said. “They are afraid of having to deal with the repercussions of their actions.”
In his view, the public narrative and the private behavior of many tech leaders are fundamentally out of sync. On stage and in interviews, they sell AI as a tool of limitless progress; offstage, according to numerous reports he referenced, some of the same figures—including high‑profile executives like Mark Zuckerberg and Sam Altman—have shown interest in remote compounds, bunkers, and other forms of “insurance” against social breakdown.
This duality is at the heart of what Rushkoff calls an “elitist exit strategy.” AI utopianism, he suggests, functions as a convenient story: it reassures markets, keeps employees inspired, and frames critics as pessimists who “don’t get it.” At the same time, that narrative obscures the extent to which the wealthiest participants are planning for scenarios in which technology accelerates inequality, destabilizes labor markets, and strains critical infrastructure.
A key part of the illusion, according to economists and technologists aligned with this critique, is the way AI hype glosses over basic economic realities. Massive AI models require staggering amounts of energy, water, physical hardware, and continuous maintenance. Training and operating them is not an abstract, magical process “in the cloud,” but a material one that depends on data centers, rare-earth minerals, and extensive supply chains. Those costs are typically externalized—pushed onto local communities, public utilities, and the environment—while the gains are captured by a relatively small circle of firms and investors.
Labor displacement is another hidden layer beneath the optimistic surface. AI is advertised as a tool that will “free humans from drudgery,” but Rushkoff and others note that the real question is who gets freed, and on what terms. Automating customer support, content moderation, logistics, and back‑office functions allows corporations to cut headcount and boost margins. That does not automatically translate into shorter workweeks, higher wages, or better conditions for everyone else. Without deliberate policy and social planning, the benefits of productivity improvements tend to accrue to owners of capital, not to the workers whose roles are diminished or erased.
This is where the rhetoric of utopia becomes especially useful for the billionaire class. By framing AI as an inevitable, almost spiritual force for good, they can sidestep practical conversations about power, ownership, and distribution. If “AI will take care of it,” then questions about universal basic income, labor protections, public AI infrastructure, or democratic oversight can be deferred indefinitely. The story becomes: progress is happening, the details will work themselves out, and doubters are simply afraid of the future.
Rushkoff’s argument flips that framing on its head. In his telling, it is not critics who are most afraid of the future, but the tech elites themselves. Their fear is not of AI per se, but of accountability—of being forced to live in the world their own incentives helped shape. In *Survival of the Richest*, he recounts how wealthy executives have asked him questions not about how to fix society, but how to insulate themselves from it: how to manage private security if the dollar collapses, how to maintain control in isolated compounds, how to keep “the masses” at bay in a crisis.
From this perspective, extreme AI optimism functions almost like a personal brand of absolution. If the narrative is that they are heroically pushing civilization forward at immense personal risk, then social disruption and inequality can be written off as unfortunate side effects of progress. The more spectacular and world‑changing the promise, the easier it becomes to justify concentration of power and wealth as necessary for “innovation.”
The reality, however, is more mundane and more political. AI does not arrive as a neutral destiny; it is being shaped by corporate strategies, regulatory choices, and underlying economic structures. Decisions about who owns core models, who controls access to computing power, what data may be harvested, and how automation is taxed or regulated will determine whether AI deepens existing divides or supports broad‑based prosperity.
Rushkoff’s critique raises a set of practical questions that cut through utopian marketing:
– Who actually benefits from AI productivity gains—shareholders, executives, or the broader public?
– How will displaced workers be supported, retrained, or compensated?
– What safeguards exist to prevent AI from amplifying surveillance, manipulation, and monopolistic control?
– Who pays for the energy, water, and environmental toll of large‑scale AI deployment?
– How transparent and accountable are the companies building these systems?
Without clear, enforceable answers, the story of AI as a universal good starts to look more like a sales pitch than a social contract.
At the same time, Rushkoff is not arguing for abandoning technology altogether. His wider body of work consistently stresses that tools are not inherently liberating or oppressive; what matters is the human context in which they are deployed. AI can augment human creativity, improve scientific research, and streamline essential services. But that potential will only be realized if society insists on human‑centered design, democratic governance, and constraints on extractive business models.
One of the dangers he highlights is the temptation to outsource not just work, but responsibility, to machines. If decisions about hiring, lending, policing, or welfare are increasingly delegated to opaque AI systems, then moral accountability becomes diffuse. It becomes easier for institutions and executives to say “the algorithm decided,” even when those algorithms were trained on biased data or optimized for profit over fairness. Fear of backlash or legal exposure may be one reason some leaders prefer to hide behind the rhetoric of inevitability rather than engage in open ethical debate.
Another overlooked dimension is the fragility of the infrastructure that underpins so‑called “intelligent” systems. Large‑scale AI relies on constant connectivity, reliable grids, stable geopolitics, and just‑in‑time supply of specialized hardware. Any significant disruption—environmental, political, or economic—could undermine those foundations. Here again, Rushkoff’s point about bunkers and exit strategies becomes telling: if the people closest to the technology are planning for large‑scale instability, that should prompt wider scrutiny of the systems they are rushing to deploy.
A more honest conversation about AI would admit both its promise and its risks without resorting to mythmaking. That means acknowledging that:
– Some jobs will be automated away, and not all workers will smoothly transition to “higher‑value” roles.
– Massive investment in public education, reskilling, and social safety nets will be required to prevent widening inequality.
– AI’s environmental footprint is non‑trivial and must be factored into any realistic assessment of its benefits.
– Power imbalances between giant tech companies and everyone else are likely to grow unless actively countered.
Instead of treating AI as a salvation narrative, Rushkoff’s perspective suggests treating it as infrastructure: essential, powerful, but ultimately a public concern, not just a private asset. That implies stronger regulation, more open standards, and serious debate about public or cooperative ownership of key components—models, data, and compute.
For individuals and organizations trying to navigate the AI era, this critique offers two main takeaways. First, be skeptical of sweeping promises that skip over concrete trade‑offs. When AI is described as an inevitable leap toward a better world, ask who is defining “better,” and for whom. Second, focus on agency: rather than waiting for tech elites to “deliver” a future, citizens, workers, policymakers, and smaller innovators can push for arrangements that keep human dignity, autonomy, and fairness at the center.
Rushkoff’s central claim is not that AI is doomed, but that uncritical AI utopianism mainly serves those already at the top. Their fear of living with the full consequences of their own systems leads them to pursue escape—through private fortresses, secret contingency plans, and comforting stories about progress—rather than repair. Exposing that fear is, in his view, the first step toward reclaiming the future of technology as something we shape together, instead of something done to us from above.