The rise of deepfakes: What digital platforms and technology organizations should know

Driven by recent advances in Generative AI tools, deepfake content has proliferated on social media platforms, growing an estimated 550% between 2019 and 2023. This rapid growth has heightened concerns among individuals, organizations, and governments worldwide, with the World Economic Forum naming deepfakes and disinformation among the key global risks for 2024. The availability of Generative AI tools gives bad actors scaled opportunities to create deepfake content and exploit vulnerabilities in digital platforms that are unprepared for this new type of risk. Digital platforms should adopt a multi-faceted approach to assessing and mitigating the risks deepfakes pose to their users. This includes identifying potential harms and supporting affected users accordingly, assessing where existing processes and controls (such as those for login and account verification) may be impacted by deepfakes, and continuously evaluating detection tools and capabilities.

The purpose of this paper is to explore the risks posed by deepfakes and how technology organizations, including digital platforms, can work to address them. It also examines current regulatory considerations in the United Kingdom, European Union, and United States, and discusses practical prevention and detection mechanisms for digital platforms and for organizations that use digital platforms for advertising and other content.
