New York AI advertising law demands clear disclosure of synthetic digital performers

New York has claimed a national first of the AI era: advertisers in the state must now clearly disclose when digital performers in their commercials are generated by artificial intelligence. The move comes just as a new executive order signed by President Donald Trump threatens to punish states that adopt what his administration calls “onerous” AI regulations by cutting off certain streams of federal funding, setting up a likely legal and political confrontation.

Governor Kathy Hochul signed two AI-focused bills at the New York headquarters of SAG-AFTRA, the powerful performers’ union that has become one of the most vocal forces in U.S. politics on AI and digital likeness rights. Together, the measures aim to inject transparency into AI use in advertising and to safeguard the likeness and voice of deceased performers from being exploited without authorization.

Under the new advertising disclosure law, companies must inform viewers when an actor, model, or on-screen “performer” they see in an ad is not a human being but a synthetic creation—an AI-generated character or a digitally replicated persona. That requirement covers television spots, online video campaigns, and other commercial content distributed in the state’s jurisdiction.

The second bill targets a rapidly emerging and contentious issue: the posthumous use of performers’ identities. It creates protections for digital replicas of deceased actors, singers, and other public figures, requiring consent from their estates before their faces, voices, or mannerisms can be recreated or manipulated using AI for new projects or commercial gain. Violations could expose advertisers, studios, or technology companies to legal claims over misappropriation or unauthorized exploitation.

SAG-AFTRA has hailed the New York legislation as a model for what it wants to see implemented nationally. The union is simultaneously lobbying in Washington for passage of the “NO FAKES Act,” a proposed federal law that would give performers and other individuals a clear, enforceable right to control AI-made replicas of their image and voice and to sue those who deploy such replicas without permission or compensation. Union leaders argue that without such guardrails, AI technologies could gut creative livelihoods and flood the market with synthetic content indistinguishable from the real thing.

New York officials framed the disclosure rule as a basic truth-in-advertising standard updated for the AI age: if a viewer cannot easily tell that a smiling family, a charismatic spokesperson, or a celebrity endorsement is actually algorithmic fiction, the state’s position is that audiences at least have a right to be told. In industries like beauty, fitness, and financial services, regulators worry that AI “actors” could be misused to fabricate testimonials, overpromise results, or fake endorsements at very low cost and with very high persuasive power.

Advertising agencies and brands are already experimenting with AI-powered “virtual influencers” and synthetic spokespersons that never age, never get sick, and can be localized and customized at massive scale. New York’s law doesn’t ban that innovation, but it does force companies to surface the fact that audiences are engaging with a digital construct. From a marketing standpoint, that disclosure could become a reputational issue: some brands may proudly lean into their use of AI, while others may worry that “synthetic cast” labels undermine authenticity and trust.

The enforcement challenge will be significant. Regulators will have to determine how to verify when a performance is fully AI-generated, partially enhanced, or simply edited using traditional digital tools. Hybrid content—where a real actor’s face, voice, or body is subtly altered or extended with generative tools—may be particularly tricky. The law will likely push agencies, production companies, and tech vendors to maintain more detailed documentation about how creative assets are produced, in case questions arise.
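
For illustration, a minimal sketch of the kind of production-provenance record an agency might keep per creative asset appears below. The field names and categories are hypothetical assumptions for the example, not anything prescribed by the New York statute.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Hypothetical provenance record an agency might keep for each creative asset.
# Field names and category values are illustrative, not drawn from the law.
@dataclass
class AssetProvenanceRecord:
    asset_id: str
    campaign: str
    production_method: str            # e.g. "live_action", "ai_generated", "hybrid"
    ai_tools_used: list = field(default_factory=list)
    human_performers: list = field(default_factory=list)
    disclosure_required: bool = False
    notes: str = ""
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AssetProvenanceRecord(
    asset_id="spot-2025-001",
    campaign="Spring Launch",
    production_method="hybrid",
    ai_tools_used=["licensed voice extension", "generated background plates"],
    human_performers=["on-camera host"],
    disclosure_required=True,
    notes="Host's final line delivered by a licensed synthetic read.",
)

# Serialize the record for an audit trail, so questions about how the asset
# was produced can be answered if a regulator or union asks later.
print(json.dumps(asdict(record), indent=2))
```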

The protections for deceased performers touch on another sensitive frontier: the ethics of resurrecting the dead for entertainment and advertising. Until now, estates have often had to fight case by case over the use of archival footage, digital doubles, or sound-alike performances. With AI capable of cloning a voice from a few seconds of audio or rebuilding a face from limited imagery, the risk of unauthorized “digital necromancy” has become much more practical and much cheaper. New York’s statute seeks to draw a line: nostalgia is fine if it’s licensed and transparent, but not if technology is used to fabricate a performance that the original artist never agreed to give.

The legal collision with the Trump administration’s executive order could be profound. By threatening to withdraw or restrict certain federal funds from states that adopt stringent AI rules, the White House is signaling it wants a light-touch, innovation-first regulatory environment. New York is moving in the opposite direction, betting that strict transparency and strong personality-rights protections are necessary to prevent abuse and preserve public trust. That clash raises constitutional questions about states’ rights, federal overreach, and the balance between fostering new industries and regulating them.

If the administration follows through on its funding threat, the dispute is likely to be fought in court. New York could argue that it is exercising its traditional authority over advertising, consumer protection, and rights of publicity—areas long managed at the state level. Civil liberties and creative-industry advocates may also weigh in, arguing that states must be free to shield residents from deceptive or exploitative uses of advanced technologies, regardless of federal economic policy priorities.

Other states will be watching closely. California, with its own deep ties to entertainment and tech, has already begun debating AI-likeness protections and deepfake rules. If New York’s approach survives a federal challenge, it could become a template for a patchwork of similar laws across the country. If, however, courts side with the executive branch, states may find their options for regulating AI in advertising and media sharply constrained unless and until Congress enacts a comprehensive federal solution.

For brands and agencies, the immediate implication is operational. Campaign planning now has to factor in AI disclosure requirements state by state. Some advertisers may choose to apply New York’s standards nationwide for simplicity, effectively turning the state’s rules into a de facto national baseline. Others might try to geo-target different versions of ads, disclosing AI usage only where legally required—a strategy that could be both technically complex and reputationally risky if viewers compare notes across jurisdictions.
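
As a rough sketch of how that geo-targeting option might be wired up, and of how thin the line between the two strategies is, consider the illustration below. The state set, file names, and helper functions are assumptions made for the example, not a description of any vendor’s actual ad-serving system.

```python
# Minimal sketch of a per-jurisdiction disclosure check. The state list and
# helper names are hypothetical; real compliance logic would track each
# statute's actual scope, definitions, and effective dates.
DISCLOSURE_REQUIRED_STATES = {"NY"}  # assumption: only New York for now

def needs_ai_disclosure(viewer_state: str, uses_synthetic_performer: bool) -> bool:
    """Return True if this ad version must carry an AI-performer disclosure."""
    return uses_synthetic_performer and viewer_state.upper() in DISCLOSURE_REQUIRED_STATES

def select_ad_variant(viewer_state: str, uses_synthetic_performer: bool) -> str:
    # A brand applying New York's standard nationwide would skip the check
    # and always serve the disclosed variant.
    if needs_ai_disclosure(viewer_state, uses_synthetic_performer):
        return "spot_v2_with_disclosure.mp4"
    return "spot_v1_undisclosed.mp4"

print(select_ad_variant("NY", True))  # spot_v2_with_disclosure.mp4
print(select_ad_variant("TX", True))  # spot_v1_undisclosed.mp4
```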

Performers, meanwhile, are likely to use the new laws as leverage at the bargaining table. Actors and voice artists have already pushed for contract clauses restricting AI training on their performances and limiting digital replication. The existence of statutory protections—especially for posthumous rights—strengthens their hand in negotiations with studios, streaming platforms, and advertisers who want broad digital rights in perpetuity. Young artists entering the industry may become far more cautious about signing away “all media, known and unknown” rights as AI tools become ubiquitous.

There are also broader cultural questions that New York’s move forces into the open. How much synthetic reality is acceptable in the stories and commercial messages that shape public perception? Should there be a clear line between creative enhancement—smoothing wrinkles, adjusting lighting, minor edits—and full synthetic fabrication of people who never existed or performances that never took place? By demanding disclosure, lawmakers are implicitly saying that audiences should be able to distinguish between those categories, rather than having them blur together invisibly.

From a technological perspective, the laws may accelerate the development of AI-detection and provenance tools. If regulators, unions, and courts need to know whether a performance is AI-generated, companies that can trace content origin, verify authenticity, and flag synthetic media stand to gain. Watermarking, cryptographic signatures attached during production, and standardized metadata frameworks could all become part of the compliance toolkit, particularly for major ad networks and large brands that prefer predictable risk over regulatory surprises.
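
As a simplified illustration of the provenance idea, the sketch below binds an asset’s content hash and its production metadata together with a keyed signature, so that tampering with either the file or its “AI-generated” flag can be detected. It assumes a shared-secret HMAC purely for demonstration; real deployments would lean on established standards such as C2PA-style manifests and proper key management rather than this toy scheme.

```python
import hashlib
import hmac
import json

# Illustrative only: a shared secret standing in for real key management.
SIGNING_KEY = b"replace-with-a-securely-stored-key"

def sign_manifest(asset_bytes: bytes, manifest: dict) -> dict:
    """Bind a content hash and production metadata together with an HMAC."""
    manifest = dict(manifest, content_sha256=hashlib.sha256(asset_bytes).hexdigest())
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that neither the asset nor its provenance metadata was altered."""
    claimed = manifest.get("signature", "")
    unsigned = {k: v for k, v in manifest.items() if k not in ("signature", "content_sha256")}
    expected = sign_manifest(asset_bytes, unsigned)
    return hmac.compare_digest(claimed, expected["signature"])

video = b"...rendered ad bytes..."
manifest = sign_manifest(video, {"asset_id": "spot-2025-001", "ai_generated": True})
print(verify_manifest(video, manifest))                # True
print(verify_manifest(video + b"tampered", manifest))  # False
```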

In the long term, the tension between innovation and regulation in AI advertising will not be resolved by one state statute or one executive order. New York’s actions, combined with union pressure for a federal NO FAKES Act and the administration’s pushback against “burdensome” rules, mark the beginning of a more fundamental debate: who controls identity in the age of generative AI, and how transparent must synthetic media be? The answers will shape not only the business of advertising and entertainment, but also public expectations about what is real, and who gets to decide.