Coinbase is betting that the next big phase of artificial intelligence won’t just be about bots generating text or images—it will be about software agents that can actually move money. To prepare for that shift, the company has rolled out a new product called Agentic Wallets, a wallet infrastructure specifically designed for AI agents, with tight security controls and guardrails built in from the ground up.
Unlike consumer-facing crypto wallets, Agentic Wallets are not meant for humans clicking buttons in a mobile app. They are purpose-built accounts that AI agents can control programmatically, under a set of explicit rules defined by the developer or business that deploys them. Coinbase describes the product as a payments and custody layer that plugs directly into its existing compliance and security stack, rather than yet another AI framework or toolkit.
“It’s not an SDK, it’s not a library—it’s a purpose-built wallet to work with an agent as quickly as possible,” explained Erik Reppel, head of engineering for Coinbase Developer Platform. In other words, the system is not itself an AI model, but a specialized financial interface that AI agents can learn to use reliably and safely. Developers can attach it to their autonomous agents so those agents can initiate real blockchain transactions—within strict limits.
Agentic Wallets run on Base, Coinbase’s Ethereum layer-2 network. That choice is strategic: Base offers lower transaction costs and faster confirmations than Ethereum mainnet, which is crucial if autonomous agents are going to execute frequent microtransactions, manage subscriptions, or interact with decentralized applications on behalf of users. At its core, the product is an infrastructure layer tailored to the way AI agents operate, rather than a consumer wallet dressed up with AI branding.
A central design priority is isolation. Each AI agent is given access to a sandboxed, self-custodial wallet environment where private keys are kept strictly separate from the agent’s broader runtime context. The goal is to reduce the blast radius if an agent is compromised, misconfigured, or tricked by malicious prompts. The AI can request on-chain actions, but it never directly handles the keys themselves.
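That separation can be sketched in a few lines. The sketch below is illustrative only (the class and method names are hypothetical, not Coinbase's API): the agent composes a transaction *request*, while a separate signing service holds the private key and is the only component that can produce a signed transaction.

```python
# Illustrative sketch of key isolation: the agent never touches the key.
from dataclasses import dataclass

@dataclass(frozen=True)
class TxRequest:
    to: str
    amount: float  # denominated in, say, USDC for this example
    memo: str = ""

class SigningService:
    """Runs outside the agent's runtime; the key never crosses the boundary."""
    def __init__(self, private_key: str):
        self._private_key = private_key  # held here, never exposed to the agent

    def sign_and_submit(self, req: TxRequest) -> str:
        # A real system would sign with the key and broadcast on-chain.
        return f"signed:{req.to}:{req.amount}"

class Agent:
    def __init__(self, signer: SigningService):
        self._signer = signer  # the agent holds a handle, not the key

    def pay(self, to: str, amount: float) -> str:
        return self._signer.sign_and_submit(TxRequest(to=to, amount=amount))

receipt = Agent(SigningService(private_key="0xSECRET")).pay("0xMERCHANT", 5.0)
```

Even if the agent's prompt context is fully compromised, the worst it can do is *ask* for a transaction; it cannot exfiltrate or misuse the key directly.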
This sandboxing is crucial in light of the growing catalogue of “prompt injection” and “jailbreak” attacks, where clever instructions cause otherwise-aligned models to behave in unexpected ways. If an agent with full key access is convinced to send all funds to an attacker’s address, there is no undo button. Coinbase’s architecture aims to prevent that single point of catastrophic failure by placing a guardrail layer between model decisions and irreversible transactions.
Those guardrails are policy-based. Developers can define what an agent is allowed to do: maximum transaction amounts, daily or hourly spending caps, allowed destination addresses or contract types, and even context‑aware rules—for example, blocking new counterparties until they pass certain checks. The wallet infrastructure enforces these policies at execution time, regardless of what the AI model “decides” to attempt.
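A minimal policy-enforcement sketch makes the idea concrete. The names here are illustrative, not Coinbase's actual API: every transaction the agent proposes is checked at execution time against developer-defined rules, and the check happens outside the model.

```python
# Hypothetical policy layer: per-transaction limit, daily cap, allowlist.
from dataclasses import dataclass

@dataclass
class WalletPolicy:
    max_tx_amount: float
    daily_cap: float
    allowed_addresses: set
    spent_today: float = 0.0

    def check(self, to: str, amount: float) -> tuple[bool, str]:
        if to not in self.allowed_addresses:
            return False, "counterparty not on allowlist"
        if amount > self.max_tx_amount:
            return False, "exceeds per-transaction limit"
        if self.spent_today + amount > self.daily_cap:
            return False, "exceeds daily spending cap"
        return True, "ok"

    def execute(self, to: str, amount: float) -> str:
        ok, reason = self.check(to, amount)
        if not ok:
            # Enforced regardless of what the model "decided" to attempt.
            return f"BLOCKED: {reason}"
        self.spent_today += amount
        return f"SENT {amount} to {to}"

policy = WalletPolicy(max_tx_amount=50.0, daily_cap=60.0,
                      allowed_addresses={"0xVENDOR"})
print(policy.execute("0xVENDOR", 30.0))    # allowed
print(policy.execute("0xATTACKER", 10.0))  # blocked: not on allowlist
print(policy.execute("0xVENDOR", 45.0))    # blocked: would exceed daily cap
```

The key property is that the policy object, not the model, has the final word on execution.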
That approach reflects a broader shift: as AI agents move from simulated environments and testnets into real economic activity, security can no longer be an afterthought. The risk profile is very different from that of a human-operated wallet. An AI agent can operate 24/7, react instantly to external events, and be targeted at scale by adversarial inputs. Guardrails must therefore be both automated and robust, or the system will be unmanageable.
Coinbase is also leaning on its existing compliance and monitoring capabilities. Because Agentic Wallets are embedded into the same infrastructure that supports Coinbase’s institutional and retail custody products, they inherit transaction monitoring, risk scoring, and other protections that regulators expect around digital assets. For enterprises experimenting with AI agents that handle customer funds, this alignment with established compliance frameworks may be as important as the technical design.
The timing of this launch is not accidental. The industry is moving from “chatty” AI—bots that answer questions—to “agentic” AI, where systems can take actions: pay invoices, allocate funds across DeFi protocols, manage payroll in stablecoins, or automatically purchase digital goods. Each of those use cases requires a trustworthy way to hold and move assets. Traditional wallets, designed around human UX and manual approvals, are ill-suited to that kind of autonomous, high-frequency activity.
Consider a few emerging scenarios for AI-native wallets:
– An AI travel assistant that not only finds flights but books them, pays in crypto, and handles refunds.
– A trading bot that rebalances a portfolio across on-chain assets within predefined risk parameters and spending caps.
– A game agent that autonomously buys, sells, or rents in-game items represented as NFTs, within a budget set by the player.
– A subscription manager agent that pays recurring fees for software or content access, pausing or canceling based on usage.
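The subscription-manager scenario, for instance, can be sketched in a few lines. Everything here is hypothetical (a toy decision loop, not a real product integration): the agent renews only while usage justifies the fee and the budget holds.

```python
# Toy subscription manager: renew, pause, or cancel based on usage and budget.
def manage_subscription(monthly_fee: float, budget: float,
                        usage_hours: list, min_hours: float = 1.0):
    decisions = []
    remaining = budget
    for hours in usage_hours:
        if hours < min_hours:
            decisions.append("pause")    # too little usage to justify renewal
        elif monthly_fee > remaining:
            decisions.append("cancel")   # budget exhausted
            break
        else:
            remaining -= monthly_fee
            decisions.append("renew")
    return decisions

# Three months of usage: heavy, light, heavy.
print(manage_subscription(10.0, 25.0, [12.0, 0.5, 9.0]))
```

The budget parameter plays the same role as a wallet-level spending cap: the agent optimizes within it but can never exceed it.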
In each case, the end user or business wants the benefits of autonomy—speed, availability, and continuous optimization—without ceding unlimited control over funds. Agentic Wallets are Coinbase’s answer to that tension: give agents the tools to participate in on-chain commerce, but constrain them with policy, monitoring, and revocability.
Self-custody is another deliberate choice. While Coinbase provides the infrastructure and guardrails, the model is structured so that the keys associated with a given agent’s wallet are not commingled with centralized exchange balances. That aligns with the crypto ethos of minimizing custodial risk while still tapping into institutional‑grade security practices. For companies building AI agents, it also simplifies integration with more decentralized services on Base and beyond.
Of course, introducing agent-controlled wallets raises new governance questions. Who sets the rules for an AI agent’s spending? How are those rules updated, and who has override powers when something goes wrong? Coinbase’s approach implicitly assumes a layered control model: human owners or administrators define policies, the infrastructure enforces them, and the AI operates within that fenced environment. Revocation mechanisms, such as freezing a wallet or tightening limits, become critical incident-response tools.
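The layered control model can be sketched as follows (illustrative names only, not Coinbase's API): administrators own the controls, the enforcement layer honors them immediately, and the agent has exactly one constrained path to spend.

```python
# Hypothetical layered control: human admin controls vs. the agent's one path.
class GuardedWallet:
    def __init__(self, daily_cap: float):
        self.daily_cap = daily_cap
        self.frozen = False

    # --- administrator controls (human-owned incident-response tools) ---
    def freeze(self):
        self.frozen = True

    def tighten_cap(self, new_cap: float):
        self.daily_cap = min(self.daily_cap, new_cap)

    # --- the only path the agent can use ---
    def request_spend(self, amount: float) -> str:
        if self.frozen:
            return "REJECTED: wallet frozen"
        if amount > self.daily_cap:
            return "REJECTED: over cap"
        return "OK"

wallet = GuardedWallet(daily_cap=100.0)
print(wallet.request_spend(40.0))  # OK
wallet.freeze()                    # revoke the agent's authority mid-incident
print(wallet.request_spend(40.0)) # REJECTED: wallet frozen
```

Because revocation lives in the enforcement layer rather than in the model's prompt, it works even against an agent that has been manipulated.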
There is also an educational and UX challenge. Developers who are used to working with AI APIs must now think like risk managers: classify transaction types, reason about limits, and design fallback behaviors when the wallet refuses an action. On the crypto side, teams that understand on-chain risk must learn how large language models and other AI systems can be manipulated. Agentic Wallets sit precisely at this intersection, and success depends on both worlds talking to each other.
From a market perspective, Coinbase is trying to position Base as a natural home for AI-native economic activity. Low fees and high throughput are important, but so is a coherent story about safety, compliance, and developer tooling. If autonomous agents become a mainstream pattern, the networks that support them securely could see sustained demand from both startups and large enterprises experimenting with AI-driven financial flows.
At the same time, this development underscores a broader trend: financial infrastructure is being redesigned with machine users in mind. Traditionally, payment systems assumed a human at the end of every transaction flow, clicking accept or typing a PIN. With AI agents, the “user” is software that must be constrained by code-based rules, not terms-of-service documents. Products like Agentic Wallets are early examples of that machine‑first design philosophy.
There are real risks, and they will not be solved by any single product. AI agents could still be misconfigured, given too much authority, or allowed to interact with malicious contracts. Poorly written policies could leave gaps that attackers exploit. Social and regulatory questions about liability—who is responsible when an AI agent misdirects funds—remain open. Coinbase’s guardrails reduce some of the technical and operational dangers, but they do not eliminate the need for careful architecture and oversight by developers and organizations.
For teams considering building AI agents that touch real money, several practical considerations follow from Coinbase’s move:
– Start with strictly limited authority: small transaction caps, whitelisted counterparties, and explicit human review for sensitive actions.
– Treat wallet policies as code that needs version control, testing, and security review—not as a last-minute configuration tweak.
– Log every attempted and blocked action by the agent to enable forensic analysis and model improvement.
– Assume models can be tricked and design wallet rules on that assumption, not on idealized behavior.
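Two of these practices — policies as tested code, and logging every attempt — can be combined in a single sketch. The names are illustrative, not any real SDK: the policy records every attempt, allowed or blocked, and the assertions below are the kind that would run in CI before deployment.

```python
# Hypothetical audited policy: every attempt is logged for forensics.
import time

class AuditedPolicy:
    def __init__(self, cap: float, allowlist: set):
        self.cap = cap
        self.allowlist = allowlist
        self.audit_log = []  # in production, a durable append-only store

    def attempt(self, to: str, amount: float) -> bool:
        allowed = to in self.allowlist and amount <= self.cap
        self.audit_log.append({  # forensic record of every attempt
            "ts": time.time(), "to": to, "amount": amount, "allowed": allowed,
        })
        return allowed

# Policies as code: version-controlled tests, run before any deployment.
def test_policy():
    p = AuditedPolicy(cap=25.0, allowlist={"0xVENDOR"})
    assert p.attempt("0xVENDOR", 10.0)        # normal payment passes
    assert not p.attempt("0xUNKNOWN", 10.0)   # new counterparty blocked
    assert not p.attempt("0xVENDOR", 100.0)   # over-cap payment blocked
    assert len(p.audit_log) == 3              # every attempt was logged

test_policy()
```

Treating the blocked attempts as first-class data, rather than silent failures, is what makes later forensic analysis and model improvement possible.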
Coinbase’s Agentic Wallets are, in effect, an attempt to make such best practices easier to adopt by baking them into a single, integrated system. As the line between AI and finance continues to blur, infrastructure that can safely translate agent decisions into on-chain actions is likely to become a critical layer of the new digital economy.
