AI Disclosure in 2026: Recent Developments and Practical Steps for Brands and Influencers

By Brooke Watson, Dynamis LLP | April 2026

When we published "AI Disclosure Laws Are Coming: What Brands/Influencers Need to Know" in October 2025, the regulatory landscape was already moving fast. Since then, it hasn't slowed down. Laws that were proposed are now signed. Deadlines that seemed distant are weeks away. And the cost of non-compliance, whether legal, reputational, or commercial, is no longer theoretical.

Here's what's changed, and what it means for your team right now, whether you're a brand, an agency, or a creator.

EU AI Act: The August 2 Deadline Is Almost Here

When we first covered Article 50 of the EU AI Act, phased enforcement was just beginning. The deadline is now close: as of August 2, 2026, transparency obligations for AI-generated and AI-manipulated content apply in full across all sectors.

What compliance looks like in practice has become considerably clearer. In December 2025, the European Commission published the first draft Code of Practice on Transparency and Marking of AI-Generated Content; a second draft followed on March 3, 2026. The Code is voluntary but is designed to serve as a compliance roadmap for Article 50's labeling requirements, and its development signals how regulators expect the rules to be applied.

Under Article 50 itself, providers of AI systems generating synthetic outputs must mark those outputs in machine-readable form, and deployers, which includes brands and agencies publishing the content, must disclose deepfakes and certain AI-generated public-interest content to end users. The Code of Practice proposes a layered approach, including design and placement requirements for labels, icons, or disclaimers. The draft code points toward prominent user-facing disclosure rather than disclosure buried only in back-end metadata, although the final guidance is still being finalized.

Brands and creators selling into or advertising in the EU, regardless of where they are based, are within the law's reach as deployers. Penalties for non-compliance remain as we reported: up to €15 million or 3% of global annual turnover, whichever is higher.

FTC: Existing Endorsement and Deception Law Applies Now

In the U.S., the most important framing point is this: there is no standalone federal AI disclosure statute. What applies today is the FTC's existing consumer protection and endorsement framework, applied to AI uses.

Under the FTC's Endorsement Guides and Section 5 of the FTC Act, endorsements must reflect the honest opinion of the endorser, material connections must be disclosed clearly and conspicuously, and you cannot talk about a product experience you have not actually had. The FTC has not adopted an AI-specific ban on virtual endorsers; instead, existing deception, endorsement, and fake-testimonial rules apply.

The practical implication is the same whether the legal hook is deceptive acts and practices or a future AI-specific rule: disclosure must be clear, conspicuous, and hard to miss. A hashtag buried among a dozen others does not meet that standard. For brands and influencers operating today, the safe path is to apply the FTC's clear-and-conspicuous standard to AI-generated content as a matter of risk management, with the understanding that New York now provides a clearer statutory hook when a covered synthetic performer is used in advertising (see below).

California: Two Laws, Two Distinct Scopes

Two California bills signed in September 2024 are relevant here, and it matters to keep them distinct.

AB 1836 amends Civil Code Section 3344.1, which governs the use of deceased personalities' names, voices, photographs, and likenesses. The amendments expanded protections against unauthorized AI-generated digital replicas of deceased individuals in expressive audiovisual works and sound recordings. This law is relevant to brands or creators producing content that depicts or evokes deceased performers or public figures, but it is not a general disclosure statute for AI use in advertising.

AB 2602 adds Section 927 to the Labor Code and addresses the contract enforceability side: for new performances fixed on or after January 1, 2025, certain contract provisions allowing the creation and use of a digital replica of a living performer's voice or likeness, in place of work the performer would otherwise have performed in person, are unenforceable unless specific requirements are met. This is a labor and contracts statute, not a consumer-facing disclosure law, but it directly affects talent agreements, influencer contracts, and any deal structure that involves licensing a living person's AI-generated likeness for performances.

Both laws apply in California as of their respective operative dates. Both are relevant to brands and creators working with real people's likenesses, and both should be reflected in your talent and contract review.

New York's Synthetic Performer Law: Signed, Effective June 9

In our original article, we described New York's Synthetic Performer Disclosure Bill as proposed legislation. It is now law. Governor Hochul signed A8887-B on December 11, 2025, as Chapter 617 of the Laws of 2025, amending General Business Law Section 396-b. The law takes effect June 9, 2026, eight weeks from now.

The core requirement: any person who produces or creates an advertisement must conspicuously disclose the use of a synthetic performer where that person has actual knowledge of its use. A "synthetic performer" is defined as a digitally created asset using generative AI or a software algorithm intended to create the impression it is a human performer who is not recognizable as any identifiable natural person.

Several carveouts matter in practice. The law does not apply to audio-only advertisements. It does not apply where AI is used solely for language translation of a human performer. It exempts advertisements and promotional materials for expressive works, including motion pictures, television, streaming content, documentaries, and video games, where the synthetic performer's use in the ad is consistent with its use in the underlying work. The obligation runs to the producer or creator of the ad, not to media outlets that merely publish or disseminate it (the statute expressly preserves 47 U.S.C. § 230 protections). The law does not create a private cause of action. Enforcement is through civil penalties of $1,000 for a first violation and $5,000 for each subsequent one.

For any brand or agency producing advertising in New York that features a digitally created human performer, compliance planning should be underway now.

Side note: The market is already moving, with or without a legal mandate

Regulation is catching up to consumer sentiment, not leading it. The Wall Street Journal reported on April 6, 2026 that brands including Aerie and Le Creuset are running "no AI" pledges as affirmative marketing positions, proactively disclosing AI-free production to get ahead of consumer skepticism. The consumer data behind that trend is striking: according to Gartner, 68% of consumers frequently wonder whether the content they see is real, and 50% would prefer to give their business to brands that avoid generative AI in consumer-facing content; separately, Cint found that 63% of U.S. consumers say brands and creators have a duty to disclose AI use in advertising and marketing. Aerie's CMO cited the brand's long-standing commitment to authenticity; Le Creuset disclosed its AI-free production process pre-emptively to counter the assumption that striking video content must be synthetic. As the WSJ noted, even brands that use AI in operational contexts are drawing a clear line at AI-generated human imagery in consumer-facing content. Disclosure, in other words, has become a brand signal as much as a legal requirement.

What Influencers Specifically Need to Know

The creator-side picture is sharper than many influencers realize, and it operates on two tracks.

Under the FTC's existing endorsement framework, a creator who uses AI to generate imagery, voice-over, or an on-screen performance in a sponsored post faces deception exposure if the result misrepresents a product experience or presents an AI persona as a real human. This is an application of existing law, not a new rule, but it applies with full force today.

Under New York's new law, where a creator produces an advertisement featuring a synthetic performer and has actual knowledge of that fact, the disclosure obligation runs to them as the producer or creator. The statutory carveouts (audio-only, language translation, expressive work promotions) may apply depending on the content, but the default for a branded partnership featuring an AI-generated visual performer is that disclosure is required.

What To Do

  • Audit all active campaigns for AI-generated or AI-altered content. For EU-facing work, compliance with Article 50 is required from August 2, 2026. For U.S. work, apply the FTC's clear-and-conspicuous standard now under existing deception and endorsement law.

  • Separate your California analysis: review talent and production agreements against both AB 1836 (deceased personality rights) and AB 2602 (digital replica contract enforceability for living performers, operative January 1, 2025).

  • Act on New York now: A8887-B takes effect June 9, 2026. Identify every advertisement your team produces or creates that features a synthetic performer and determine whether a statutory carveout applies. Where it doesn't, build your disclosure into production.

  • Brief your influencer partners and agencies: the producer/creator obligation in New York runs to whoever makes the ad, which may include a creator working independently on a brand deal.

  • Embed C2PA provenance metadata in AI-touched assets going forward. Major platforms including Google, Meta, and TikTok have integrated Content Credentials functionality, and provenance signals are increasingly surfaced to users.

The Takeaway

The original thesis holds: disclosure is compliance, and transparency is a competitive advantage. What's changed is the pace. The AI-regulation wave we described in October has now produced concrete obligations in New York, California, and Brussels.

Brands and creators that build transparency into their workflow now are not just managing legal risk. In a market where consumer trust is under sustained pressure, being legible about your process is increasingly a differentiator. As we said in October: in an era of synthetic everything, honesty about process is the new luxury. In some jurisdictions and use cases, it is now also the law.

Brooke Watson is a Partner at Dynamis LLP, a former federal prosecutor, and former Deputy Criminal Chief of the U.S. Attorney's Office for the Southern District of Florida. She advises clients on white-collar defense, government investigations, and complex litigation. She studied at Brown University, Northeastern University School of Law, and Parsons School of Design. To speak with Dynamis LLP about a federal criminal matter, visit dynamisllp.com.

This post is intended for informational purposes only and does not constitute legal advice. Readers should consult qualified counsel regarding their specific circumstances.
