The Rise of Synthetic Media in Advertising: Creative Revolution or Brand Risk?
Navigating deepfakes, AI-generated content, and authenticity in the age of infinite content
Synthetic media has moved from experimental to operational faster than most marketing organizations have developed policies to govern its use. Meanwhile, consumers have grown wary of AI's rapid arrival in creative workflows, and early experiments have drawn public backlash, as with Coca-Cola's AI-generated holiday ads.
The creative possibilities are genuinely revolutionary. AI can generate personalized video content at scale, create product imagery for items that don't physically exist yet, and produce multilingual campaigns without the complexity of traditional localization processes. The efficiency gains are substantial—what once required weeks of production can now be accomplished in hours.
But the brand risk landscape is evolving just as rapidly. Dove's pledge never to use AI in place of real people in its advertising, accompanied by its Real Beauty Prompt Playbook, underscores a central concern: AI models trained on curated or broadly scraped datasets tend to reproduce the societal biases embedded in that data.
The technical capabilities have advanced to the point where synthetic content can be virtually indistinguishable from traditional production. High-quality AI-generated imagery, realistic deepfake video, and synthetic voice generation are accessible through consumer-grade tools. This democratization of synthetic media creation means brands need governance frameworks that account for content created outside official channels.
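One concrete building block for such governance is content provenance. As a crude illustration, the sketch below checks whether a media file *appears* to carry a C2PA / Content Credentials manifest, the provenance standard backed by the Content Authenticity Initiative. This byte-scan heuristic is an assumption for illustration only: it merely looks for the `c2pa` JUMBF box label and performs no cryptographic validation, which real verification via a C2PA SDK would require.

```python
from pathlib import Path

# Naive provenance sniff: flags whether a C2PA/Content Credentials manifest
# *appears* to be embedded in a media file. Heuristic sketch only -- real
# verification means cryptographically validating the manifest with a C2PA
# SDK, which this deliberately does not attempt.

C2PA_LABEL = b"c2pa"  # JUMBF box label used by embedded C2PA manifests


def appears_to_have_credentials(data: bytes) -> bool:
    """Return True if the raw bytes contain the C2PA JUMBF label."""
    return C2PA_LABEL in data


def scan_file(path: str) -> bool:
    """Convenience wrapper: read a file and run the heuristic over it."""
    return appears_to_have_credentials(Path(path).read_bytes())
```

A pipeline like this can only triage: a hit means "provenance metadata may be present, verify it properly," and a miss means nothing, since manifests are easily stripped in transit.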
The authenticity paradox is particularly challenging. Consumer research shows that audiences increasingly value authentic, unpolished content over highly produced marketing materials. Yet AI excels at creating polished, professional-looking content while struggling with the imperfections that signal authenticity.
The legal landscape is still evolving. Issues around intellectual property rights for AI-generated content, consent for synthetic representations of real people, and liability for AI-generated misinformation are largely unresolved. Brands using synthetic media need legal frameworks that don't yet fully exist.
The competitive implications are significant. Organizations that develop sophisticated synthetic media capabilities can reduce production costs, increase creative output, and personalize content at unprecedented scale. But they also take on new categories of brand risk and operational complexity.
The governance approach that seems most practical involves clear policies about disclosure, human oversight requirements, and specific use cases where synthetic media is and isn't appropriate. The brands handling this best are those that view synthetic media as a powerful tool that requires careful management rather than either avoiding it entirely or implementing it without guardrails.
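The three pillars of that governance approach (approved use cases, disclosure, and human oversight) can be encoded as a simple publication gate. The sketch below is hypothetical: the use-case names, the rule set, and the function are illustrative assumptions, not drawn from any published brand guideline.

```python
from dataclasses import dataclass

# Hypothetical policy gate for synthetic-media assets. All category names
# and rules here are illustrative assumptions, not a real brand's policy.

APPROVED_USES = {"product_mockup", "localization", "background_fill"}
PROHIBITED_USES = {"synthetic_spokesperson", "real_person_likeness"}


@dataclass
class AssetReview:
    use_case: str         # what the synthetic asset is used for
    discloses_ai: bool    # is AI involvement disclosed to the audience?
    human_reviewed: bool  # has a human signed off before publication?


def may_publish(asset: AssetReview) -> tuple[bool, str]:
    """Apply the three checks in order: use case, disclosure, oversight."""
    if asset.use_case in PROHIBITED_USES:
        return False, "use case is prohibited outright"
    if asset.use_case not in APPROVED_USES:
        return False, "use case not on the approved list; escalate for review"
    if not asset.discloses_ai:
        return False, "AI involvement must be disclosed"
    if not asset.human_reviewed:
        return False, "human sign-off required before publication"
    return True, "cleared for publication"
```

The point of expressing policy as code is less automation than forcing precision: a rule vague enough that it cannot be written down this way is probably too vague to govern anything.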
Sources: Association of National Advertisers (ANA) AI Ethics Guidelines; eMarketer AI in Marketing Report 2025; Content Authenticity Initiative Research; Adobe State of Content Report; MIT Technology Review AI Ethics Study