As AI tools are increasingly integrated into advertising workflows, AI disclosure and content provenance become critical considerations. While many AI applications in advertising provide value without deceptive intent—such as language translation, image enhancement for improved quality, or resizing images to meet various advertising specifications—the industry must still address complex issues surrounding transparency and consumer trust.
The U.S. regulatory landscape surrounding content provenance and AI use disclosure is still in its early stages. There is growing concern among policymakers over the need for clear, unambiguous identification of AI-created materials or AI-related consumer interactions, including policy proposals for visible disclosures such as watermarking and pop-up notices, as well as invisible, machine-readable disclosures of origin data embedded in the content itself. Many in the advertising industry believe the risk posed by deceptive, AI-derived advertising materials can be mitigated through the establishment of uniform technical standards that certify the origin and lineage of media content. Uniformity and scalability of labeling or notice protocols across the industry and across jurisdictions are of paramount concern for brands and agencies alike.
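To make the "invisible, machine-readable disclosure" concept concrete, the sketch below shows one way origin data could be attached to an image as embedded metadata, using Python and the Pillow library. The manifest fields, key name, and file names are illustrative assumptions only; they do not represent an established industry standard such as C2PA or any specific regulatory requirement.

```python
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical provenance manifest -- field names are illustrative,
# not drawn from any adopted standard.
manifest = {
    "ai_generated": True,
    "generator": "example-genai-model",          # assumed tool name
    "created": "2025-01-15T12:00:00Z",
    "edits": ["background_extended", "color_corrected"],
}

# Embed the manifest as a PNG text chunk so downstream tools can read
# the origin data without changing how the creative looks to consumers.
image = Image.open("ad_creative.png")            # placeholder file name
metadata = PngInfo()
metadata.add_text("provenance", json.dumps(manifest))
image.save("ad_creative_labeled.png", pnginfo=metadata)

# A verifier (ad platform, publisher, or auditor) could read it back:
labeled = Image.open("ad_creative_labeled.png")
print(json.loads(labeled.text["provenance"]))
```

A real content provenance scheme would add cryptographic signing and tamper evidence on top of simple embedded metadata like this, which is part of why uniform technical standards matter to the industry.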
Federal focus on this issue has not advanced beyond a nominal bipartisan interest in requiring conspicuous disclosures in certain political communications created or materially altered by AI. At the state level, various initiatives are also being explored to address content provenance and AI disclosures, with some states considering laws that would require clear labeling of AI-generated media in digital advertisements, social media posts, and other forms of public communication. These state-level efforts reflect the growing recognition of the need for transparent AI use, particularly as the technology becomes more integrated into the consumer experience. The ongoing development of regulatory frameworks highlights the challenge of balancing the benefits of AI-driven innovation with the need for accountability and consumer trust. As policymakers continue to navigate this landscape, the question remains how best to implement AI disclosure or content provenance laws that ensure both innovation and consumer protection.
How We Engage on State AI Laws
In the absence of federal action on GenAI content provenance labeling, watermarking, and AI use disclosure, U.S. states have begun to introduce their own legislation.
4As Content Provenance Working Group
In late 2024, the 4As convened a group of interested members for a "content provenance roundtable" to explore the issue, potential solutions, and remaining challenges. As part of this effort, the 4As released a summary of findings, drawing on those roundtable conversations, meetings with other industry stakeholders, and additional research. The path to a viable, scalable, and practical solution remains unclear and will require much more industry discourse and debate. The 4As is here to facilitate and guide that conversation, and we remain committed to further exploration and collaboration on this issue across the industry.
Communicating the advertising industry's ongoing efforts and solutions regarding AI use disclosure and content provenance to state and federal policymakers is a key advocacy priority for the 4As. This engagement helps ensure that future regulations align with preferred industry standards and practices and do not impose conflicting requirements.
Contact Jeremy Lockhorn, 4As SVP, Creative Technologies & Innovation, for more information about the 4As Content Provenance roundtable and other collaborative efforts to develop industry standards in this area.