Sasquatch selfie adventures. Dating tips from fairy tale princesses. Fast-breaking news that may or may not be real. Creative and sometimes concerning uses of generative video are populating social media feeds and competing for eyes in the attention economy. Reality, it seems, may be facing even greater competition than it already does.

Generative video could empower independent creators to produce more for less while reinforcing social media platforms’ ability to deliver compelling short-form entertainment and gain a greater share of digital advertising. The same capabilities could also overwhelm audiences, erode authenticity, and provoke regulators to try to contain the potential negative side effects of generative video.

When anyone can produce realistic video and publish it to potentially millions as “news,” branded content, fan fiction, and much more, or use it to scam, coerce, or deliberately misinform people, the potential for misuse could strengthen the drumbeat of regulators seeking to contain new media.

Deloitte predicts that, in 2026, generative video could provoke a regulatory response in the United States, potentially driving more age verification in more states, refreshing federal challenges to Section 230 protections established in 1996 by the Communications Decency Act,1 and requiring labeling for AI content published on social platforms. Such regulatory efforts have already begun in some US states, like New York, Tennessee, and Utah.2 The US Supreme Court has declined to hear objections to an age-verification law for social media use in Mississippi.3 The European Union’s Digital Services Act also includes provisions for “effective age assurance methods.”4 In 2026, a US election year, social platforms may be compelled to use their AI and data capabilities to better manage generative content. Some platforms are already advancing these solutions.5

Generative AI will likely enable a glut of video content while also powering better moderation of this content, all at scale. Regulators may look to see how effective these efforts are at managing perceived online harm. Platforms will likely do the same, while also monitoring for reduced engagement, lower monetization, and compliance challenges.

Some independent content creators are being empowered by generative tools

Generative video models can create short clips of high-quality video and audio that are nearly indistinguishable from “real” content.6 Their relative ease of use and cost-effectiveness are empowering some creatives to pursue more ambitious ideas, experiment with a much lower risk of failure, and even rapidly test creative concepts in the hyper-competitive marketplace of social video.

Though they may not be able to deliver 30-minute TV shows or full two-hour movies, generative video tools are very capable of producing compelling, made-for-social video content—like high-profile ads bringing internet memes to life.7 In fact, the perceived limitations of generative video that may slow Hollywood adoption seem to be empowering some creators and social video platforms, where short-form videos, fast cuts, and selfies are hallmarks of virality, and where audiences may be less discerning about the free entertainment they receive.

For independent creators—and maybe soon for all media production—generative tools may be less about replacing the entire production stack to render fully synthetic content, and more about eliminating costly micro-tasks, compressing the time to create, and empowering smaller outfits to do more.

Generative AI and video tools are powering cheaper and faster content creation, eliminating more of the micro-tasks in production, distribution, and measurement.8 This can amplify outputs to help keep up with a fast cadence of publishing, often necessary to engage followers and stand out in the algorithmic feeds of social platforms.

Many tools focus on time- and money-saving shortcuts, like quickly generating videos from scripts and “one-click” clip generation.9 This can help enable creators to rapidly test variants to determine which approaches work better with specific audiences and trending algorithms. Other tools are enabling creators to generate AI avatars of themselves that can reduce fatigue while still engaging audiences and even enabling greater personalization at scale.10 The same features are expanding into generative ads.11

Generative AI tools can also support non-generative content with faster editing, like removing “ums,” silences, and bad takes, fixing shaky cameras, and automating the removal of dead space.12 Multilingual dubbing tools can open access to foreign language audiences, expanding engagement and ad revenue potential.13

With these capabilities, creator studios can more quickly ideate, generate content, target audiences, measure results, and repeat. This could not only disrupt the economics of content but also lead to exponentially more of it. A greater supply could intensify competitive pressure among creators, which could in turn inspire even more creative work.

Generative video could threaten Hollywood and social media platforms alike

As Deloitte showed in its “2025 Media and Entertainment Outlook,” some major studios and publishers have been exploring generative video but have been hesitant to integrate it into productions. This caution may stem, in part, from a fear of undermining their premium content offerings with synthetic media, but also from challenges from talent: The SAG-AFTRA strike of 2023 included demands to limit the use of generative AI in productions.14 Yet, Hollywood studios are often burdened by high production costs and may hope that generative AI will eventually reduce that burden.15

At the same time, traditional studios and streamers face greater competition for advertising dollars.16 While some Hollywood studios work to stem ad losses from a declining linear TV business and migrate their advertising businesses to connected TVs and streaming video services, many social platforms have been taking more digital advertising dollars. Advertiser spending on these platforms is showing significantly greater growth than other digital media, like streaming video services.17

Generative AI’s ability to both quickly generate content and predict which segments and individuals will engage with it appears to be transforming digital advertising.18 With simple prompts, social platforms can automatically generate thousands of ads with small variations, and then instantly test which variants perform the best.19 This is enabling ad buyers to spend less on creating ads and more on testing variants that return the highest success rates, reinforcing the competitive advantage of social platforms.20
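
To make the mechanics concrete, the sketch below shows one simple way such variant testing can work in principle: an epsilon-greedy loop that gradually shifts impressions toward the ad variants with the best observed click-through rates. The variant names, click-through rates, and functions are illustrative assumptions, not any platform’s actual system.

```python
import random

# Hypothetical click-through rates for three AI-generated ad variants (illustrative only).
TRUE_CTR = {"variant_a": 0.021, "variant_b": 0.034, "variant_c": 0.027}

def serve_and_observe(variant: str) -> bool:
    """Simulate serving one impression and observing whether it gets a click."""
    return random.random() < TRUE_CTR[variant]

def epsilon_greedy(variants, impressions=10_000, epsilon=0.1):
    """Allocate impressions: mostly exploit the current leader, occasionally explore."""
    shows = {v: 0 for v in variants}
    clicks = {v: 0 for v in variants}
    for _ in range(impressions):
        if random.random() < epsilon or not any(shows.values()):
            choice = random.choice(variants)  # explore a random variant
        else:
            # Exploit: pick the variant with the best observed click-through rate so far.
            choice = max(variants, key=lambda v: clicks[v] / max(shows[v], 1))
        shows[choice] += 1
        clicks[choice] += serve_and_observe(choice)
    return {v: (shows[v], clicks[v] / max(shows[v], 1)) for v in variants}

if __name__ == "__main__":
    for variant, (n, ctr) in epsilon_greedy(list(TRUE_CTR)).items():
        print(f"{variant}: {n} impressions, observed CTR {ctr:.3%}")
```

Real ad systems use far more sophisticated allocation and measurement, but the underlying loop is the same: generate variants, serve, measure, and reallocate spend toward what works.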

Yet, social platforms will also likely see greater risks from their own generative video efforts. They may confront a further boom in the amount of video content they must deliver and manage, some of it likely crossing into copyright violations or worse. They may risk audiences being overwhelmed by “AI slop” and a rapid devaluation of the principal currency that has fed much of social media’s growth: authenticity.

A year ago, we asked people in the United States how they felt about generative media.21 Sixty-four percent agreed that generative AI on social media is dangerous; 76% agreed that online content creators should be transparent about when and where they use generative AI in their content; and 53% agreed that online content creators who use generative AI are not authentic.

Now, a year later, both the capabilities of generative video and the amount of it on platforms have grown considerably. Generative video appears to be quickly approaching parity with reality and could soon tread into dangerous territory, potentially attracting regulators and empowering bad actors. There are concerns about fraud as models enable bad actors to use AI to impersonate people.22 Along with sasquatch selfie videos come the likely influence campaigns, scams, political disinformation, and conspiratorial rantings. This could even affect legal proceedings if video evidence becomes untrustworthy. Yet, without regulatory oversight, audience exodus, or punitive damages, platforms have little incentive to rein it in.

The bottom line: Social platforms can protect the truth—and their best interests

Synthetic media, AI slop, and disrupted business models could pale in comparison to the societal challenges that arise when anyone can make and distribute realistic videos and video evidence is no longer a reliable form of truth. Watching the leading edge of generative videos—especially the ones trying to illustrate the risks of fake news feeds, celebrity sightings, false flags, and political gaffes—it’s hard to downplay the wave of disruption that seems to be fast approaching.

To get ahead of these risks, social platforms should work to develop and integrate watermarking, AI labeling, and ways to track and reveal the provenance of all content, including ads, that is uploaded to or generated by their services. Seemingly inevitable political manipulation and consumer deception should move regulators to work with platforms to establish stronger guardrails for generative content, such as required labeling and watermarking.
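
One way to picture provenance tracking is as a signed manifest, created at upload or generation time, that binds a content hash to a disclosure label. The sketch below is a minimal illustration under that assumption; real initiatives such as C2PA content credentials are far richer, and the field names and signing key here are hypothetical.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical platform signing key; a real deployment would use proper key management.
SIGNING_KEY = b"platform-secret-key"

def build_provenance_manifest(video_bytes: bytes, creator_id: str, ai_generated: bool) -> dict:
    """Create a minimal provenance record binding a content hash to a disclosure label."""
    manifest = {
        "content_sha256": hashlib.sha256(video_bytes).hexdigest(),
        "creator_id": creator_id,
        "ai_generated": ai_generated,  # disclosure label surfaced to viewers
        "uploaded_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(video_bytes: bytes, manifest: dict) -> bool:
    """Check that the stored hash and signature still match the content."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(video_bytes).hexdigest())
```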

In the United States, Section 230 of the Communications Decency Act, which has protected open platforms from liability for content they host, could be challenged if platforms don’t get ahead of these accelerating risks.23 European regulators have already shown a strong willingness to regulate US social platforms and data collectors.24 Developing stronger compliance automation, like compliance agents that monitor the outputs of other generative tools, could enable platforms to respond rapidly to violations at scale.
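
As a rough illustration of what such compliance automation might look like, the sketch below shows a tiny “compliance agent” that scans a batch of content items and flags undisclosed AI content and likely scam language. The rules, field names, and thresholds are assumptions made for illustration; production systems would combine trained classifiers, human review, and policy teams.

```python
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    item_id: str
    ai_generated: bool
    has_ai_label: bool
    transcript: str
    flags: list = field(default_factory=list)

# Hypothetical policy rules; real platforms would use far richer signals.
BANNED_PHRASES = ("guaranteed returns", "send gift cards")

def compliance_check(item: ContentItem) -> ContentItem:
    """Flag common policy issues: undisclosed AI content and likely scam language."""
    if item.ai_generated and not item.has_ai_label:
        item.flags.append("missing_ai_disclosure")
    if any(phrase in item.transcript.lower() for phrase in BANNED_PHRASES):
        item.flags.append("possible_scam_language")
    return item

def run_agent(queue: list[ContentItem]) -> list[ContentItem]:
    """Scan a batch and return only the items needing action (label, demote, or review)."""
    return [item for item in map(compliance_check, queue) if item.flags]

if __name__ == "__main__":
    batch = [
        ContentItem("vid-1", ai_generated=True, has_ai_label=False,
                    transcript="A sasquatch takes a selfie"),
        ContentItem("vid-2", ai_generated=False, has_ai_label=False,
                    transcript="Guaranteed returns if you act now"),
    ]
    for flagged in run_agent(batch):
        print(flagged.item_id, flagged.flags)
```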

For all its connectivity, transparency, and celebration of humanity, social media has also advanced the fragmentation of information, the deregulation of media, and the capabilities of bad actors. Without strong efforts from the social platforms enabling these capabilities, generative video could greatly amplify this condition and further unmoor society from any shared sense of ground truth.

BY

Chris Arkenberg

United States

Tim Bottke

Germany

Endnotes

  1. U.S. Congress, Senate, “S. 314 – A bill to protect the public from the misuse of the telecommunications network and telecommunications devices and facilities,” accessed Oct. 22, 2025.

  2. Mayer Brown LLP, “Children’s online privacy: recent actions by the states and the FTC,” Feb. 25, 2025.

  3. Ella Lee, “Supreme Court — Mississippi social-media law and minors’ access,” The Hill, Aug. 14, 2025.

  4. European Commission, “Commission press corner detail: IP/25/1820,” press release, July 14, 2025.

  5. James Beser, “Extending our built-in protections to more teens on YouTube,” YouTube News & Events Blog, July 29, 2025.

  6. The New York Times, “AI video deepfakes – quiz and playground,” June 29, 2025.

  7. Bill Chappell, “AI video ad, Kalshi advertising NBA finals,” NPR, June 23, 2025.

  8. Thomas H. Davenport and Nitin Mittal, “How generative AI is changing creative work,” Harvard Business Review, Nov. 14, 2022.

  9. Torin Anderson and Shuo Niu, “Making AI-enhanced videos: Analyzing generative AI use cases in YouTube content creation,” Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (2025): pp. 1–7.

  10. Collectively Inc., “How content creators are embracing generative AI and AI avatars: insights from our latest survey,” Jan. 14, 2025.

  11. Jess Weatherbed, “TikTok ads may soon contain AI-generated avatars of your favorite creators,” The Verge, June 17, 2024.

  12. Anderson and Niu, “Making AI-enhanced videos: Analyzing generative AI use cases in YouTube content creation.”

  13. Yael Malamatinas, “7 of the best AI dubbing tools to translate videos into different languages,” Vimeo, blog, April 28, 2025.

  14. Screen Actors Guild – American Federation of Television & Radio Artists, “SAG-AFTRA statement on the use of artificial intelligence and digital doubles in media and entertainment,” March 17, 2023.

  15. Katie Kilkenny, “Higher costs are hitting film and TV producers even as studios keep trimming budgets,” The Hollywood Reporter, April 17, 2025.

  16. Chris Arkenberg, Jeff Loucks, Kevin Westcott, Danny Ledger, and Doug Van Dyke, “2025 media and entertainment outlook,” Deloitte Insights, April 23, 2025.

  17. Interactive Advertising Bureau, “Digital ad revenue surges 15% YoY in 2024, climbing to $259B,” April 17, 2025.

  18. Ryan Browne, “AI is disrupting the advertising business in a big way — industry leaders explain how,” CNBC, June 15, 2025.

  19. Charles James, “Generative AI for retail ad campaign variants and A/B testing automation,” ResearchGate, Nov. 9, 2024.

  20. Interactive Advertising Bureau, “Nearly 90% of advertisers will use Gen AI to build video ads, according to IAB’s 2025 video ad spend & strategy full report,” July 15, 2025.

  21. China Widener, Jana Arbanas, Doug Van Dyke, Chris Arkenberg, Bree Matheson, and Brooke Auxier, “2025 digital media trends: Social platforms are becoming a dominant force in media and entertainment,” Deloitte Insights, March 25, 2025.

  22. Clare Duffy, “OpenAI’s Sam Altman warns of an AI ‘fraud crisis’,” CNN, July 22, 2025.

  23. Paris Martineau, “Exclusive: Section 230 may finally get changed — lawmakers prep new bill,” The Information, accessed Oct. 22, 2025.

  24. Dawn Carla Nunziato, “The Digital Services Act and the Brussels Effect on platform content moderation,” Chicago Journal of International Law 24, no. 1 (2024): pp. 1–37.

Acknowledgments

The authors would like to thank Rohan Gupta and Jana Arbanas for their contributions to this article.

Cover image by: Jaime Austin; Adobe Stock