Don’t panic about AI videos — plan
These convincing videos are improving fast. Here’s how to respond.
AI video is evolving fast, but comms pros still win with authenticity, oversight, and smart brand protection against deepfakes, according to Center for AI Strategy Advisor and Hunter Chief Digital & Social Officer Michael Lamp.
Q: With OpenAI launching Sora 2 and Meta rolling out Vibes—both essentially AI video feeds—we’re seeing major platforms bet that users will embrace AI-generated content as entertainment. How should communications and agency leaders rethink their content strategy? Does this mean that platforms themselves will now prioritize synthetic content?
Michael Lamp: It’s premature to assume that synthetic video is going to top human-generated content. Right now, these two examples and others like them represent innovation within the AI space more than a trend indicating a shift in user preference. Further, these are still being deployed as separate feeds or products within platforms, not truly integrated into the feeds users prioritize. If that changes, it will be a much stronger signal that brands and agencies need to figure out a response to synthetic video. Whether that response is to do more of it or to rally against it is a different question.
Q: Some note that users increasingly evaluate content based on whether it’s fun to watch rather than whether it’s real. When communications and agency teams are competing against infinite AI-generated videos, what’s the strategic value proposition for investing in fact-checked, human-produced content—and how do you make the business case for it?
ML: The value is still clear, because we still see human-created content driving the most virality in the feeds that matter most to people and platforms. When we analyze paid campaigns, we often see user-generated content (UGC) topping studio creative, and that’s a strong signal that users on social will always value content from other real users.
The efficiency question is more challenging and dynamic, and the best way to make the case is to show the results: you can drive more impact by keeping at least a majority share of human-created content than by overhauling your entire content strategy to focus on synthetic video. Synthetic video is an important innovation, but that doesn’t mean there’s a reason to go from 0 to 60 and override the incredible work creators are doing every day.
Q: Given that we now have multiple platforms making it easy to create convincing synthetic videos, what authentication and verification systems should companies be implementing today to protect their brand from deepfakes and ensure their legitimate content can be distinguished from AI-generated imposters?
ML: Internally, they should be building committees and workstreams to define how they want their brand to be used and referenced in the AI era. They should then ensure there are appropriate protocols and sophisticated platforms in place that allow them to monitor for unlawful, unethical or otherwise non-approved use of the brand’s identity. Brands and agencies also have an obligation to defend not only the IP of businesses but also the privacy of individuals. We should be building and maintaining internal processes to deter deepfakes, but we should also be banding together and supporting watchdog organizations that push the industry to regulate and build better checks and balances.
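For illustration (this is an editorial sketch, not part of Lamp’s remarks), one concrete piece of such a verification system is content provenance: the brand publishes cryptographic fingerprints of its official videos so that a clip claiming to be official can be checked against them. The names below are hypothetical, and a production deployment would typically rely on public-key content credentials (such as C2PA) rather than a shared signing secret.

```python
# Minimal, hypothetical sketch: a brand keeps a signed manifest of digests for
# its official video files, so a clip claiming to be official can be checked
# against assets the brand actually published. Assumes key management is
# handled elsewhere; real systems would use public-key signatures (e.g. C2PA
# content credentials) instead of an HMAC shared secret.
import hashlib
import hmac
import json
from pathlib import Path

SECRET_KEY = b"replace-with-a-securely-stored-signing-key"  # placeholder

def file_digest(path: Path) -> str:
    """SHA-256 digest of a media file, streamed so large videos fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(asset_paths: list[Path]) -> dict:
    """Digest every official asset and sign the manifest with HMAC-SHA256."""
    entries = {p.name: file_digest(p) for p in asset_paths}
    payload = json.dumps(entries, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"assets": entries, "signature": signature}

def verify_clip(manifest: dict, clip_path: Path) -> bool:
    """Check the manifest signature, then check the clip against its digests."""
    payload = json.dumps(manifest["assets"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # manifest itself was tampered with
    return file_digest(clip_path) in manifest["assets"].values()
```

A fingerprint check like this only confirms whether a file matches something the brand released; monitoring for deepfakes that imitate the brand or its people still requires the detection platforms and watchdog partnerships Lamp describes above.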

