AI has for some time played an important role in how digital platforms respond to their regulatory requirements, with its use in online content moderation being an obvious example. Ofcom’s recent report on its strategic approach to AI 2025/26 noted that “Online platforms use automated content moderation to identify harmful content at scale and with greater speed, helping improve safety for their users”. Ofcom also observed that GenAI could be used to generate synthetic content to plug gaps in the training data used to develop content moderation tools.
The growing use of automated moderation systems is a trend evident in transparency reports produced by services under the Digital Services Act (DSA). In addition, data from the European Commission’s DSA Transparency Database shows that 49% of decisions taken to remove content in the last six months were fully automated. Given ongoing advancements in AI and the opportunity for cost efficiencies, this use is expected to grow.
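As a rough illustration of how such a figure is derived, the share of fully automated removal decisions can be computed from statement-of-reasons records. The field name and values below are assumptions chosen for illustration, not the Transparency Database’s exact schema or API.

```python
from collections import Counter

# Illustrative statement-of-reasons records; the "automated_decision"
# field and its values are assumptions loosely modelled on the DSA
# Transparency Database, not its actual schema.
records = [
    {"decision_id": 1, "automated_decision": "fully_automated"},
    {"decision_id": 2, "automated_decision": "partially_automated"},
    {"decision_id": 3, "automated_decision": "fully_automated"},
    {"decision_id": 4, "automated_decision": "not_automated"},
]

def automation_share(records):
    """Return the share of decisions that were taken fully automatically."""
    counts = Counter(r["automated_decision"] for r in records)
    total = sum(counts.values())
    return counts["fully_automated"] / total if total else 0.0

print(f"{automation_share(records):.0%}")  # 2 of 4 records -> 50%
```

In practice the same calculation would run over the database’s bulk data exports rather than an in-memory list, but the aggregation logic is the same.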
Civil society organisations and external experts have also long played a crucial role in the online safety ecosystem, shaping both policy and public discourse. According to UNESCO1, civil society refers to the collective of non-governmental organisations, community groups and associations that operate independently of the state and the market. It is seen as fostering civic engagement, promoting social justice, advocating for public interests and ultimately strengthening democracy and enhancing community resilience.
New UK and EU regulations are increasingly formalising this role. In the EU, this shift is part of a deliberate, high-level policy, as underscored by the European Commission’s recent ‘Strategy for Civil Society’, which highlights the vital role such organisations play in "democratic checks and balances, helping monitor policy and decision-making and fostering transparency and accountability."
In the context of digital regulation, relevant civil society organisations may include:
In the current geopolitical climate, certain civil society groups have faced scrutiny and criticism, given the debate about the relationship between content moderation and freedom of speech. In this context, earlier this year the European Board for Digital Services affirmed that "independent civil society organisations… play an essential role in the effective application of the DSA in the EU" and that it "rejects any measures that undermine that essential role of civil society and trusted flaggers."
The growing importance of civil society is also evident in the evolving UK and EU regulatory landscape, something that in-scope platforms will need to balance against their increasing use of AI to respond to and mitigate risk.
Figure 1 sets out a number of regulatory activities that we see as being particularly relevant in this regard, which we elaborate on further in the remainder of this article.
As Figure 1 illustrates, a new regulatory toolkit is formalising civil society's role across a number of areas. This includes addressing the dissemination of illegal content, identifying and studying systemic risks, and combatting disinformation.
The dissemination of illegal content
Expert bodies already play a crucial role in reporting specific content instances to platforms. This manifests in voluntary initiatives run by digital platforms to collaborate with expert partners on content moderation and policy, as well as formalised regulatory requirements.
Under the DSA, Digital Services Coordinators (Member State authorities that play a role in monitoring and enforcing obligations) can formally appoint organisations as trusted flaggers. Platforms are then required to prioritise notices of illegal content provided by these flaggers. In the UK, the focus of Ofcom’s Codes of Practice is more narrowly on fraudulent content, with the list of recommended flaggers including public bodies such as law enforcement and the Financial Conduct Authority.
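The prioritisation obligation can be thought of as a triage queue in which trusted-flagger notices are always handled before ordinary user reports. The sketch below is a hypothetical model of that idea, not any platform’s actual implementation; the names and priority values are illustrative.

```python
import heapq
import itertools
from dataclasses import dataclass, field

# Hypothetical notice-triage model: trusted-flagger notices are given a
# lower priority value so a priority queue pops them first. This is an
# illustrative sketch, not a real platform's moderation pipeline.
@dataclass(order=True)
class Notice:
    priority: int                              # 0 = trusted flagger, 1 = standard
    seq: int                                   # arrival order breaks ties
    content_id: str = field(compare=False)
    trusted_flagger: bool = field(compare=False)

_counter = itertools.count()
queue: list[Notice] = []

def submit(content_id: str, trusted_flagger: bool) -> None:
    priority = 0 if trusted_flagger else 1
    heapq.heappush(queue, Notice(priority, next(_counter), content_id, trusted_flagger))

def next_notice() -> str:
    """Return the content ID of the highest-priority pending notice."""
    return heapq.heappop(queue).content_id

submit("post-101", trusted_flagger=False)
submit("post-202", trusted_flagger=True)
print(next_notice())  # the trusted-flagger notice "post-202" is handled first
```

Within each priority band, the sequence counter preserves first-in, first-out ordering, so earlier standard notices are not starved by later ones.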
At the time of writing2, 71 trusted flaggers have been designated under the DSA across 21 of the 27 EU Member States, as set out in Figure 2.
Despite this established framework, analysis of the DSA Transparency Database, which captures content moderation decisions taken by online platforms, reveals a regime that is still maturing, with several key trends evident:
Given these findings, it is no surprise that the role of trusted flaggers is an area of future regulatory focus. The European Commission is planning new guidelines to streamline the appointment of trusted flaggers, which could substantially increase both the number of designated flaggers and the volume of notices that platforms must process. Following a recent update, the timeline for these guidelines has been revised: a public consultation is now expected in Q2 2026, with final adoption anticipated before the end of 2026.
The identification and study of systemic risks
Beyond flagging individual pieces of illegal content, both the Online Safety Act (OSA) and DSA now include formal mechanisms allowing external experts to identify systemic issues.
In the UK, Ofcom’s super-complaints regime, effective since January 2026, allows eligible expert bodies to formally raise concerns about systemic risks of harm across services (or on a single service in severe cases). These complaints can trigger regulatory action, including information gathering and enforcement, and carry reputational risks given Ofcom’s commitment to transparency.
Similarly, the EU’s new vetted researcher regime, in force since October 2025, provides a mechanism for approved researchers to access non-public platform data to investigate systemic risks and their mitigation. Platforms must adhere to specific processes and timescales, outlined in a Delegated Act, when responding to these requests.
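Operationally, meeting those timescales means tracking a response deadline for every incoming request. The sketch below illustrates the idea; the 15-day window is an assumed placeholder for illustration, not the timescale actually set by the Delegated Act.

```python
from datetime import date, timedelta

# Illustrative deadline tracker for vetted-researcher data requests.
# RESPONSE_WINDOW_DAYS is an assumed placeholder value, not the period
# prescribed by the Delegated Act.
RESPONSE_WINDOW_DAYS = 15

def response_deadline(received: date, window_days: int = RESPONSE_WINDOW_DAYS) -> date:
    """Date by which the platform must respond to a request received on `received`."""
    return received + timedelta(days=window_days)

def is_overdue(received: date, today: date) -> bool:
    """True if the response window for a request has elapsed."""
    return today > response_deadline(received)

print(response_deadline(date(2026, 4, 1)))  # 2026-04-16
```

A real compliance workflow would also log escalations and any grounds for amendment requests, but the core mechanic is simply date arithmetic against the prescribed window.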
While the UK currently lacks an equivalent, the Data (Use and Access) Act has amended the OSA, granting the Government the power to introduce regulation to set up such a regime. Whilst the Government has not yet proposed next steps on this topic, Ofcom has produced a report setting out “three potential policy options that can facilitate greater researcher access to information about online safety matters, which the UK Government may consider as part of the design of any future access framework”. These options include one largely along the same lines as the DSA regime; indeed, Ofcom suggests it would require platforms already subject to DSA obligations to make only “minimal changes”.
Combatting disinformation
Finally, combatting disinformation is a critical driver, with its importance magnified by the geopolitical trends discussed earlier. This is clearly demonstrated by the European Democracy Shield proposals, which signal a firm intention to formalise the role of expert groups in safeguarding European democracy.
The European Media Freedom Act also demonstrates this dynamic. While the Act is designed to protect democratically important content, the Commission has acknowledged its potential for exploitation. The primary risk is that malicious actors, such as state-backed disinformation outlets, could register as Media Service Providers (MSPs). By doing so, they could exploit the protections afforded to MSPs, such as a mandatory 24-hour takedown delay, allowing manipulative content to circulate widely before it can be removed. To counter this vulnerability, recent guidance encourages VLOPs to establish "dedicated channels" and "feedback mechanisms", empowering civil society to identify and report organisations that are abusing these protections.
Beyond the Media Freedom Act, the role of civil society is also being embedded into proactive governance frameworks under the DSA. For instance, such organisations are now routinely included in Digital Services Coordinator-led roundtable discussions ahead of major elections. Similarly, the DSA Code of Practice on Disinformation establishes a 'Rapid Response System', which enables civil society groups to flag time-sensitive, election-related disinformation. This system has been deployed during recent elections in Hungary and Bulgaria, demonstrating a particular role for civil society and fact-checking organisations during critical democratic events.
Civil society's increasingly formalised role introduces new regulatory requirements that companies will need to navigate. This should complement the increasing emphasis on the use of AI tools as part of risk management processes. Though each regime has distinct obligations, they all empower civil society organisations and external experts, meaning the strategic implications are largely shared across regimes. Affected services should therefore consider a holistic response, rather than addressing each requirement in isolation.
This section explores these common implications through three key areas for online services to focus on: Robust Governance and Documentation, Responsive Operations and Strategic Engagement.
This remains an evolving area, but as should be clear from the above, it is one that regulatory authorities across Europe can be expected to continue to prioritise. Those platforms that can combine a continued and enhanced use of AI with efficient and effective processes for responding to external requests from appropriate civil society bodies stand to benefit most.
References:
1. UNESCO, the United Nations Educational, Scientific and Cultural Organization, is a specialised agency dedicated to strengthening international cooperation in the fields of education, science, culture and information.
2. All references in this article are correct as of 21 April 2026.