Man Admits Using AI to Rake In $8 Million in Fake Streaming Royalties
A North Carolina resident has admitted in federal court that he used artificial intelligence and automated streaming accounts to siphon more than $8 million in music royalties meant for legitimate artists, according to the U.S. Department of Justice.
Michael Smith pleaded guilty in the Southern District of New York to one count of conspiracy to commit wire fraud after a multi‑year investigation into a sprawling online scheme. As part of his plea, Smith agreed to surrender the royalty payments he illicitly earned and now faces a maximum sentence of five years in federal prison. His sentencing hearing is scheduled for July 29.
Prosecutors say Smith built an entire catalog of tracks that were never performed or composed by human musicians. Instead, he relied on AI music tools to mass‑produce thousands of songs, generating instrumental and vocal tracks automatically. Those songs were then uploaded to streaming platforms under various aliases, labels, and artist names to make them appear legitimate.
The alleged fraud did not stop at creating music. Smith also orchestrated networks of automated user accounts, essentially bots, programmed to stream his AI‑generated catalog around the clock. Over time, those automated plays reached into the billions, triggering royalty payments from streaming services that calculate compensation based on the number of listens each track receives.
“Michael Smith generated thousands of fake songs using artificial intelligence and then streamed those fake songs billions of times,” U.S. Attorney Jay Clayton said in a statement announcing the plea. Prosecutors argue that every fake play effectively diverted money away from human artists whose music was competing for the same royalty pool.
According to court filings, Smith’s operation ran for years before being shut down, quietly drawing in millions of dollars by exploiting the opaque and highly automated nature of modern streaming platforms. The case underscores how vulnerable royalty systems can be when they rely heavily on algorithmic distribution and large‑scale data without robust verification.
How AI Supercharged the Scheme
AI music generators have become sophisticated enough to produce tracks that sound like plausible background music, lo‑fi beats, ambient soundscapes, or even full songs with vocals. Although many tools are designed for legitimate creative use, Smith allegedly treated them as a factory: define a style, click generate, upload, repeat.
This industrial approach allowed him to build a vast music library far faster and far cheaper than any human artist realistically could. The perceived diversity of that catalog (different titles, pseudonymous artist names, and varied genres) likely helped it evade basic scrutiny and gave the scheme an air of authenticity.
When matched with automated streaming, AI‑generated content becomes particularly dangerous. There is no need to build a fan base or market the music; bots simply “listen” in bulk, triggering royalty payouts without any real human audience.
The Mechanics of Streaming Fraud
While the case is extreme, it follows recognizable patterns used in other forms of streaming manipulation:
– Creation of thousands of fake or compromised user accounts.
– Use of scripts, software, or rented “click farms” to simulate real user behavior.
– Continuous replay of specific playlists populated by the perpetrator’s tracks.
– Distribution of those tracks across multiple artist names and labels to avoid concentration that might trigger red flags.
Platforms typically pay royalties from a shared pool that is divided based on the percentage of total plays. That means each fake stream does not just generate a small payment for the fraudster; it also slightly reduces the share available for real artists. At scale, as in Smith’s case, that impact can reach into the millions.
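A toy model makes the dilution effect concrete. All figures below are hypothetical, chosen only to illustrate how a pro‑rata pool redistributes money when bot streams enter the count:

```python
# Toy model of a pro-rata royalty pool (all numbers hypothetical).
# Each rightsholder's payout is their share of total plays times the pool.

def payouts(pool_dollars, plays_by_artist):
    total = sum(plays_by_artist.values())
    return {name: pool_dollars * n / total for name, n in plays_by_artist.items()}

pool = 1_000_000  # monthly royalty pool, in dollars
honest = {"artist_a": 6_000_000, "artist_b": 4_000_000}

before = payouts(pool, honest)
# artist_a earns $600,000; artist_b earns $400,000

# A fraudster injects 2.5 million bot streams into the same pool.
after = payouts(pool, {**honest, "bot_catalog": 2_500_000})
# artist_a drops to $480,000; artist_b to $320,000;
# the bot catalog captures $200,000 without a single real listener.
```

The pool never grows to accommodate the fake plays; the fraudster's cut comes entirely out of the honest artists' shares.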
Why This Case Matters for Artists and Platforms
For human musicians already frustrated by low streaming payouts, this case highlights a painful reality: not only can AI‑generated music crowd platforms with near‑infinite supply, but it can also be weaponized to directly drain royalty pools. When an automated system is fooled into treating bot traffic as genuine fan engagement, it becomes a tool for theft.
For streaming services, the Smith case is a reputational and operational warning. Their business models depend on trust that plays reflect real listeners and that royalties are distributed fairly. High‑profile fraud exposed in court threatens to erode that trust and invites tougher oversight from regulators and rights organizations.
The Legal and Regulatory Signal
Smith’s guilty plea for conspiracy to commit wire fraud sends a clear signal beyond the music industry. U.S. authorities are framing AI‑assisted scams not as harmless experimentation or “growth hacking,” but as traditional financial crimes carried out with new tools.
Wire fraud charges are a go‑to for prosecutors in digital schemes because they cover the use of electronic communications and financial transfers to deceive and obtain money. By successfully securing a guilty plea, prosecutors are effectively putting would‑be imitators on notice: using AI and automation to exploit digital platforms does not live in a legal gray zone.
As AI tools become more accessible, similar cases are likely to emerge in other sectors, from advertising impressions and social media metrics to online marketplaces and content platforms. Each industry that ties money to automated metrics is a potential target.
AI in Music: Innovation vs. Exploitation
AI‑generated music itself is not illegal. Many producers and artists use AI ethically as a creative aid: generating ideas, backing tracks, or sound textures. Some labels experiment with AI‑assisted composers for film scores, games, or sound design. The line is crossed, however, when AI is used not as a tool of expression, but as an engine for deception.
The Smith case illustrates three major risk zones:
1. Scale – AI allows content to be produced at a volume no human creator can match, which makes abuse highly efficient.
2. Anonymity – Mass‑produced tracks can be hidden behind countless aliases, complicating detection.
3. Monetization Loopholes – When payout systems are driven by quantitative metrics alone, they become prime targets for gaming.
Balancing innovation with protection will require more sophisticated policies from labels, platforms, and regulators.
How Platforms Can Respond
Streaming services are under growing pressure to upgrade their defenses against AI‑driven fraud. Among the measures that can help:
– Advanced anomaly detection: Machine learning models that identify unnatural listening patterns, such as extreme repetition, unusual time‑of‑day behavior, or geographically implausible clusters of activity.
– Stronger identity checks for rightsholders: Verifying who is uploading content and whether they have rights to monetize it, especially when new or unfamiliar entities flood the system with large catalogs.
– Tiered trust systems: Providing full monetization privileges only after accounts demonstrate sustained, organic engagement, rather than immediate payouts from day one.
– Closer cooperation with labels and distributors: Sharing data about suspected fraud and standardizing protocols for flagging suspicious catalogs or listening behavior.
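As a first approximation of the anomaly detection described above, a platform might flag accounts whose listening is dominated by a single track or that are active around the clock. The sketch below is illustrative only; the thresholds and data shapes are assumptions, and production systems would combine many richer signals:

```python
from collections import Counter

# Hypothetical first-pass filter for bot-like listening behavior.
# An "event" is a (track_id, hour_of_day) pair for a single account.

def flag_account(events, repeat_ratio=0.8, max_hours_active=20):
    if not events:
        return False
    track_counts = Counter(track for track, _ in events)
    top_share = track_counts.most_common(1)[0][1] / len(events)
    hours_active = len({hour for _, hour in events})
    # Real listeners rarely loop one track for 80%+ of their plays,
    # and humans do not stream in 20+ distinct hours of a day.
    return top_share >= repeat_ratio or hours_active >= max_hours_active

bot = [("t1", h % 24) for h in range(100)]              # one track, all day
human = [("t1", 9), ("t2", 9), ("t3", 18), ("t4", 20)]  # varied, bounded hours

flag_account(bot)    # True
flag_account(human)  # False
```

Simple rules like these are easy for sophisticated fraud rings to evade, which is why the machine-learning approaches mentioned above look at broader patterns such as device fingerprints and geographic plausibility.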
Implementing such safeguards is costly and complex, but cases like Smith’s demonstrate that the cost of inaction can be even higher, both financially and in terms of credibility.
The Human Cost Behind the Numbers
The headline figure of $8 million can obscure the granular impact on working musicians. Streaming income is already thin for many artists; small percentage shifts in royalty pools can determine whether they can cover basic costs such as studio time, touring, or even rent.
Every fraudulent stream essentially displaces a fraction of a cent that might have gone to an artist with a real audience. Multiply that by billions of fake plays and you end up with fewer resources flowing to the people who actually create the music listeners value.
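A back-of-the-envelope check shows why the play counts in this case had to reach into the billions. The per-stream rate below is an assumption (commonly cited ballpark figures sit around a third of a cent, but actual rates vary widely by platform, region, and deal):

```python
# Back-of-the-envelope estimate; the per-stream rate is an assumption,
# not a figure from the case.
per_stream = 0.003                       # dollars per play, ballpark
streams_needed = 8_000_000 / per_stream  # plays required to reach $8M
# Works out to roughly 2.7 billion plays, consistent with the
# "billions of streams" described in the charges.
```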
This isn’t just an economic issue. When artists feel that the system is rigged (flooded with AI content, distorted by bots, and vulnerable to large‑scale scams), it undermines their incentive to participate and to release new work through mainstream platforms.
What This Means for the Future of AI Music
Smith’s guilty plea is likely to accelerate a policy shift around AI‑generated music. Several trends are already emerging:
– Label policies on AI content: Some labels and distributors are beginning to demand explicit disclosure when AI tools are used, or to outright decline purely AI‑generated uploads.
– Metadata and watermarking: Technical standards may evolve to tag or watermark AI‑generated audio so platforms can track and moderate its use.
– New contractual norms: Agreements between artists, labels, and services may include clauses about AI usage, fraud liability, and responsibility for monitoring abuse.
– Potential legislative action: Lawmakers are increasingly interested in regulating AI in creative industries, from transparency requirements to stronger penalties for AI‑enabled fraud.
Rather than eliminating AI from music, these trends suggest a coming era of tighter governance around how AI‑created works are identified, monetized, and audited.
Lessons for Creators and Tech Builders
For legitimate artists, understanding how such schemes operate is important, not to copy them, but to recognize warning signs: sudden unexplained spikes in streams, unusual demographic patterns, or associations with shady distributors. Reporting suspected fraud can help protect royalty pools and maintain a more level playing field.
For developers building AI music tools, the case is a reminder that technology will be judged not just by what it can do, but by how easily it can be abused. Integrating safeguards such as rate limits, usage monitoring, and clear terms against fraud may become a competitive advantage, not just a legal shield.
For entrepreneurs designing new streaming or royalty systems, this is a cautionary tale: any model that ties money to easily manipulated metrics should assume that someone will try to game it, especially when AI can create content and behavior at industrial scale.
—
Smith’s conviction marks one of the clearest examples to date of AI being used to orchestrate large‑scale financial fraud in the music business. By turning generative technology and automated accounts into tools for siphoning off more than $8 million, he exposed vulnerabilities at the core of the streaming economy.
How regulators, platforms, and the music industry respond now will shape whether AI becomes primarily a force for creative expansion or a recurring weapon in the hands of fraudsters who see digital systems as targets to be exploited.
