Web2 took our data, web3 exposed it: why the next internet must put users first

Web2 captured our data, web3 laid it bare — the next internet must finally put users in charge.

For years, the dominant narrative painted the internet as a benign “convenience machine”: faster search, personalised feeds, free apps financed by ads that stayed politely in the background. In reality, we sleepwalked into a quiet transfer of power. Control shifted from individuals to platforms, from autonomy to extraction, from real consent to a performance of consent wrapped in UX tricks.

Today’s internet does not simply carry our activity — it scrutinises it. Every tap, scroll, purchase, GPS ping, hesitation on a video, late-night search, or half-written message is captured, correlated, and fed into models we never knowingly opted into. Our data is no longer a by-product; it is the primary fuel of a surveillance economy so sophisticated that it can infer things many of us have never said aloud.

These inferences are not superficial. They reveal political leanings, hint at sexual orientation, pick up on signs of depression or burnout, anticipate relationship conflict, estimate financial stress, and pinpoint exactly which emotional triggers will make us click, buy, or stay. The most powerful digital platforms did not rise on the strength of better features alone. They rose by building the most detailed, dynamic dossiers on billions of people.

And we normalised it. The erosion of agency did not arrive as an emergency or a scandal; it arrived as cookie banners, frictionless sign-ins, nudges, dark patterns, and terms-of-service pages nobody read but everyone accepted. We traded away control in tiny, incremental steps until the loss felt invisible.

Then AI arrived — and quietly amplified the problem.

Generative AI is packaged as a helpful assistant: drafting emails, summarising documents, generating images, brainstorming ideas. But behind the friendly tone is an extractive logic more advanced than anything web2 ever deployed. To function, these systems feed on our most intimate digital exhaust: prompts, private conversations, work documents, diaries masquerading as chats, photo libraries, emotional outbursts, late-night fears, and the metadata that stitches it all together.

Most people use AI tools as if they were private notebooks or trusted confidants. They are neither. The largest AI providers systematically collect, store, analyse, and train on precisely the material users assume is ephemeral and confidential. Our questions and vulnerabilities are not simply answered; they are ingested.

The result is unprecedented: for the first time, not just companies, but learning systems themselves are internalising our boundaries, triggers, insecurities, curiosities, and coping mechanisms. If web2 hollowed out privacy by hoarding data, AI hollows it out by modelling our inner lives. We are moving into a world where machines understand our likely behaviour not because we disclosed our identity, but because we left enough fragments for them to assemble a model of us that can be more predictive than our own self-image.

Crypto appeared as a philosophical counterstrike to this concentration of power. It promised self-sovereignty — genuine ownership of money, identity, and data. It offered an escape from opaque intermediaries and rent-seeking platforms. But in its first mainstream incarnation, web3 over-corrected in the opposite direction. In trying to eliminate the need for trust, it encoded radical transparency into everything.

Blockchains turned human behaviour into open ledgers. Wallet flows, transaction histories, social networks, saving and spending patterns, investment choices — all publicly traceable, in many cases forever. The result is a new paradox: the stack that was meant to empower individuals also created an almost perfect environment for analytical surveillance. Today, chain analysis firms can build financial and social profiles with precision many banks and advertisers could only fantasise about.

Web2 harvested our data. Web3 spotlighted it. In different ways, both sidelined the user’s right to decide who sees what, when, and under which conditions. The conclusion is not that decentralisation failed and must be abandoned; it’s that it was architected without a robust theory of privacy and choice.

At the core of both eras lies a deceptively simple design flaw: users do not truly control what others can see or derive about them. Permissions are typically binary and coarse. Data is either public or buried in proprietary silos. Once something is shared or put on-chain, it often becomes permanent, searchable, and reusable in ways the original user never imagined.

The next internet must flip this logic at the protocol level. Instead of treating privacy as an add-on — an app feature, a browser extension, a plug-in, or a thin encryption wrapper — privacy has to be a property of the infrastructure itself. That means shifting from selectively encrypting certain fields or addresses to encrypting the entire computational stack: state, storage, logic, user interactions, and execution.

When encryption is embedded at the protocol layer, the design space changes dramatically. Computation can remain verifiable and composable, but the underlying data is not universally visible by default. This enables what can be described as “smart transparency”: a world where the default state is encrypted and opaque, and visibility becomes an intentional, scoped act by the user or application, rather than a permanent, irrevocable exposure.

Under smart transparency, several principles emerge:

– Privacy becomes the baseline, not the exception.
– Transparency becomes granular and programmable, not all-or-nothing.
– Access is time-bound and context-aware, not perpetual (see the sketch after this list).
– Identity can be proven through credentials and proofs, not raw data dumps.
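
To make these principles less abstract, here is a minimal, purely illustrative sketch of what a scoped, time-bound, revocable access grant could look like at the application level. The names (AccessGrant, may_access, the attribute labels) are hypothetical; a real protocol would enforce such grants cryptographically rather than with an in-memory check.

```python
from dataclasses import dataclass
from time import time


@dataclass
class AccessGrant:
    """Illustrative record of a user-issued, scoped, time-bound data grant."""
    grantee: str        # who may read the data
    scope: frozenset    # which attributes the grant covers
    purpose: str        # the declared context of use
    expires_at: float   # unix timestamp after which the grant is void
    revoked: bool = False


def may_access(grant: AccessGrant, requester: str, attribute: str, purpose: str) -> bool:
    """Access is allowed only inside the grant's scope, purpose, and lifetime."""
    return (
        not grant.revoked
        and requester == grant.grantee
        and attribute in grant.scope
        and purpose == grant.purpose
        and time() < grant.expires_at
    )


# Example: a lender may check a solvency attestation for 24 hours, nothing more.
grant = AccessGrant(
    grantee="lender-app",
    scope=frozenset({"solvency_proof"}),
    purpose="loan-application",
    expires_at=time() + 24 * 3600,
)
assert may_access(grant, "lender-app", "solvency_proof", "loan-application")
assert not may_access(grant, "lender-app", "full_transaction_history", "loan-application")
```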

Crucially, this doesn’t have to come at the expense of developer freedom. Programmability can be preserved through cryptographic techniques that allow code to run on encrypted inputs while still producing verifiable outputs. Developers can still build complex applications; they just no longer need to see every detail of user data in the clear to do so.
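
Computing on data you cannot read is not a hypothetical capability; homomorphic schemes already allow it. The toy below uses the well-known multiplicative homomorphism of textbook RSA purely to make the idea concrete. The tiny key sizes and the scheme itself are illustrative only; a production system would rely on modern fully homomorphic encryption or zero-knowledge constructions, not raw RSA.

```python
# Toy illustration: an untrusted party multiplies two ciphertexts without ever
# seeing the plaintexts, and only the key holder can read the result.

p, q = 61, 53                 # tiny primes, purely for demonstration
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

# The user encrypts two private values.
c1, c2 = encrypt(6), encrypt(7)

# An untrusted service computes on the ciphertexts without learning 6 or 7.
c_product = (c1 * c2) % n

# Only the key holder can decrypt the outcome of that computation.
assert decrypt(c_product) == 6 * 7
```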

Users, meanwhile, regain meaningful agency. They can decide which attributes to reveal, to whom, and for what purpose. They can participate in financial systems, gaming, social networks, and AI ecosystems without surrendering a permanent, legible record of their behaviour to anyone with a block explorer or an analytics dashboard.

A persistent misunderstanding about privacy is the belief that people want to disappear. That isn’t accurate. Most individuals are comfortable being seen — by friends, collaborators, regulators, or counterparties — when there is a clear purpose and a clear boundary. People don’t want invisibility; they want selectivity. They want to decide when to be anonymous, when to be pseudonymous, and when to be fully known.

This distinction matters for the future of digital identity. The next iteration of the internet should be able to support:

– Selective disclosure: proving you are over 18, solvent enough for a transaction, or licensed to perform a task without exposing everything else about your finances or background (see the sketch after this list).
– Contextual personas: using one identity for professional life, another for gaming, another for activism — all cryptographically bound to you, but not trivially linkable by default.
– Revocable access: granting a service the right to use certain data for a defined period or function, and then rescinding that right in a way that is enforceable by the underlying protocol.
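
As a rough illustration of selective disclosure, the sketch below uses salted hash commitments in the spirit of SD-JWT-style credentials: each attribute is committed to separately, and the holder opens only the ones a verifier actually needs. The issuer's signature over the commitments and all real-world encoding details are omitted, and the attribute names are invented for the example.

```python
import hashlib
import secrets

def commit(name: str, value: str, salt: str) -> str:
    """Salted hash commitment to a single attribute."""
    return hashlib.sha256(f"{salt}:{name}:{value}".encode()).hexdigest()

# Issuance: every attribute gets its own salt and commitment. In a real
# credential the issuer would also sign the list of commitments.
attributes = {"name": "Alice", "birth_year": "1990", "account_balance": "12400"}
salts = {k: secrets.token_hex(16) for k in attributes}
commitments = {k: commit(k, v, salts[k]) for k, v in attributes.items()}

# Presentation: the holder discloses only the attribute the verifier needs.
disclosed = {"birth_year": (attributes["birth_year"], salts["birth_year"])}

# Verification: the verifier checks the opened value against its commitment
# and learns nothing about the undisclosed attributes.
for name, (value, salt) in disclosed.items():
    assert commit(name, value, salt) == commitments[name]
```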

In such a model, “data ownership” stops being a slogan and becomes a set of enforceable primitives. It is not just that your files or keys belong to you; it is that computation on your data happens under rules you define, within cryptographic boundaries no single company can quietly bypass.

This shift is also essential for AI. If we continue on the present trajectory, AI assistants will become the most intimate data sinks in history: always-on confidants that double as training pipelines for corporate models. A user-controlled internet could invert that relationship. AI systems could run over encrypted personal data, generating useful insights or assistance without exfiltrating raw content back to central servers or folding it into global training datasets by default.

Imagine AI that lives closer to the user than to the platform: personal models that learn your preferences on your device or within your encrypted environment, while only sharing aggregated, consented signals outward when you explicitly agree. In that world, AI augments autonomy rather than eroding it.
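
As a thought experiment, here is a deliberately simple sketch of that inversion: a preference model whose raw history stays on the device, with an explicit consent gate in front of the only signal that can leave. The class and method names are invented for illustration; a real system would add encryption and noise before anything is shared at all.

```python
from collections import Counter


class LocalPreferenceModel:
    """Hypothetical on-device model: raw interactions never leave the device;
    only a coarse, explicitly consented aggregate can be shared outward."""

    def __init__(self) -> None:
        self._events = Counter()  # raw history, kept local

    def observe(self, topic: str) -> None:
        self._events[topic] += 1

    def shared_signal(self, consent: bool, top_k: int = 3) -> list:
        """Release only the top-k topic labels, and only with explicit consent."""
        if not consent:
            return []
        return [topic for topic, _ in self._events.most_common(top_k)]


model = LocalPreferenceModel()
for topic in ["cycling", "cycling", "jazz", "privacy", "privacy", "privacy"]:
    model.observe(topic)

print(model.shared_signal(consent=False))  # []: nothing leaves without consent
print(model.shared_signal(consent=True))   # ['privacy', 'cycling', 'jazz']
```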

There is also a regulatory dimension. Laws attempting to enforce privacy and data rights on top of an architecture that was never designed for user control will always feel reactive, slow, and incomplete. When data can be copied endlessly and siphoned invisibly, asking users to read more consent screens is a losing game. By contrast, when the infrastructure starts from encrypted-by-default, compliance and rights protection can be enforced technically, not just legally.

Economically, user-controlled data also opens new models. Instead of platforms extracting value from behavioural data behind closed doors, users could choose to share certain insights or anonymised aggregates on their own terms, potentially being compensated or at least fully informed. Markets could form around opt-in data collaboration, where privacy is preserved but value is still created.

This is not a call for a nostalgic return to a simpler web. The next internet will be more complex, more intelligent, and more interconnected than anything we have now. But complexity is not an excuse to abandon user rights; it is a reason to encode those rights more deeply into the stack.

The path forward is not about rejecting AI, dismantling web3, or idealising a pre-platform era. It is about acknowledging that the first two major waves of the internet optimised for scale and efficiency at the expense of sovereignty — and deliberately choosing a different trade-off for the third. We can build systems where privacy and programmability coexist, where decentralisation does not equate to radical exposure, and where intelligence does not require unrestricted intimacy.

Rebuilding the internet is not just a technical project; it is a moral and political one. The question is no longer whether users have something to hide. The question is whether they have the unquestioned right to decide what to reveal, to whom, and under which rules.

In the next era of the internet, user sovereignty should not be an aspirational tagline. It must become the operational norm.