Amazon has issued a cease-and-desist letter to Perplexity AI, demanding that its AI agent, known as Comet, immediately stop conducting transactions on the Amazon platform. The confrontation marks one of the first high-profile disputes over the growing use of autonomous AI-powered web agents in e-commerce.
According to the letter, Amazon alleges that Perplexity's AI tool misrepresents itself by operating bots that mimic human behavior to make purchases, thereby violating Amazon's terms of service. The tech giant also claims that the use of such agents compromises both the integrity of its platform and the privacy of its users. Specifically, Amazon asserts that Comet performs actions on behalf of users without making clear to the site that it is an automated tool, which, Amazon argues, misleads its systems and could open the door to fraudulent activity or data misuse.
The company further states that Perplexity’s AI agent has a detrimental impact on the user experience by interfering with how Amazon’s systems are designed to interact with real human customers. By automating purchases and navigating the site programmatically, Comet may be bypassing or manipulating user interface elements intended for human decision-making, which Amazon says undermines the trust and functionality of its platform.
Perplexity AI, however, strongly denies the allegations. A spokesperson for the company dismissed Amazon's accusations as baseless and characterized the cease-and-desist as an attempt to stifle innovation. "Amazon's claims are typical legal bluster and completely unfounded," the spokesperson said, arguing that consumers should have the freedom to choose digital tools, including personal assistant bots, that facilitate their online shopping. They likened Amazon's stance to a store demanding that customers use only in-house personal shoppers, suggesting that such control over user choice is anti-competitive.
The dispute has ignited a broader conversation about the role of AI agents in online commerce. As AI tools become more capable of navigating websites, comparing prices, and making purchases autonomously, major retailers are beginning to define the boundaries of acceptable use. Some argue that AI agents can empower consumers by saving time and making more informed purchasing decisions, while others warn that these agents may disrupt retail ecosystems, strain server resources, and exploit pricing algorithms.
This situation also shines a light on the growing tension between platform control and consumer automation. Retailers like Amazon have built highly optimized systems that rely on predictable consumer behavior, and the emergence of AI intermediaries introduces variables that companies may view as destabilizing. These AI agents can potentially scrape massive amounts of data, simulate human interactions, and even leverage dynamic pricing in ways that human users cannot, raising concerns about fairness and sustainability.
From a technical standpoint, it is unclear exactly how Comet operates under the hood. However, experts suggest that such AI agents likely use advanced web scraping techniques, natural language processing, and decision-making algorithms that enable them to parse product listings, add items to carts, and complete checkouts—all with minimal or no human involvement. Amazon’s terms of service explicitly prohibit automated access that mimics human behavior for commercial purposes, which may be the crux of the legal dispute.
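To illustrate why self-identification is at the heart of the dispute, here is a toy heuristic for flagging automated requests from HTTP headers. This is purely illustrative: Amazon's actual detection systems are undisclosed, and the signals below are simplified assumptions, not anything attributed to either company.

```python
# Illustrative only: a toy heuristic for spotting automated requests.
# This is NOT Amazon's actual detection logic, which is not public.

AUTOMATION_SIGNALS = ("bot", "headless", "crawler", "spider")

def looks_automated(headers: dict) -> bool:
    """Flag a request as likely automated using simple header heuristics."""
    ua = headers.get("User-Agent", "").lower()
    # An agent that self-identifies (e.g. "ExampleAgent/1.0 (bot)") is
    # trivial to flag and to rate-limit or block...
    if any(signal in ua for signal in AUTOMATION_SIGNALS):
        return True
    # ...but an agent sending a stock browser User-Agent with plausible
    # headers is indistinguishable by this check alone -- which is why
    # "mimicking human behavior" is the crux of the complaint.
    if "Accept-Language" not in headers:
        return True
    return False
```

The asymmetry the sketch exposes is the point: header heuristics only catch agents that choose to announce themselves, so a tool that deliberately presents as a human browser forces platforms toward more invasive behavioral detection.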
Legal analysts suggest that this case could set a precedent for how AI agents are treated under digital commerce laws. If Amazon pursues further legal action or updates its policies to more aggressively block AI agents, it may influence how other e-commerce platforms address similar challenges. At the same time, AI developers may need to rethink how their bots interact with web services to avoid future conflicts.
In the broader landscape, this conflict touches on critical issues of transparency, consent, and control. Should users be allowed to delegate their digital presence to a machine? Should websites have the right to block such delegation if it affects their infrastructure or business model? These are questions that regulators and courts may soon need to address.
The rise of agentic AI — autonomous systems capable of taking actions on behalf of users — is only beginning. As consumers increasingly rely on digital assistants to manage tasks from shopping to scheduling, the importance of defining ethical and legal frameworks for these tools grows. Companies like Perplexity are at the forefront of this shift, pushing the boundaries of what AI can do, while tech giants like Amazon are drawing lines to protect their ecosystems.
Going forward, we may see more companies developing their own AI agents or offering APIs that support third-party AI integration in a controlled manner. Rather than banning external AI tools outright, platforms could consider creating secure, transparent protocols that allow AI agents to interact within specified guidelines. This could lead to a new generation of AI-compatible e-commerce, where innovation and compliance go hand-in-hand.
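One way such a "secure, transparent protocol" could look is an agent that declares its identity and requests a scoped set of permissions up front. The sketch below is entirely hypothetical: the header names, scopes, and policy URL are invented for illustration, and no such Amazon or industry API exists today.

```python
# Hypothetical sketch of a "declared agent" handshake. All names here
# (X-Agent-Scopes, the policy URL, the scope list) are invented for
# illustration; no such platform API currently exists.

ALLOWED_SCOPES = {"search", "price_compare", "purchase"}

def build_agent_headers(agent_name: str, version: str, scopes: set) -> dict:
    """Build headers an AI agent could send to identify itself and
    declare, in advance, which actions it intends to perform."""
    unsupported = scopes - ALLOWED_SCOPES
    if unsupported:
        raise ValueError(f"unsupported scopes: {sorted(unsupported)}")
    return {
        # Self-identifying User-Agent, analogous to well-behaved crawlers.
        "User-Agent": f"{agent_name}/{version} "
                      "(automated; +https://example.com/agent-policy)",
        # Declared scopes let the platform allow search but, say,
        # require human confirmation before a purchase completes.
        "X-Agent-Scopes": ",".join(sorted(scopes)),
    }
```

A platform receiving these headers could then apply per-scope policy (rate limits for `search`, checkout confirmation for `purchase`) instead of a blanket ban, which is the "innovation and compliance go hand-in-hand" outcome the paragraph above describes.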
In the meantime, the clash between Amazon and Perplexity underscores the need for dialogue between AI developers and platform providers. With AI advancing rapidly, both sides must find ways to coexist, balancing innovation, user empowerment, and platform integrity.
