
Responding to external scrutiny from civil society in the age of AI

How digital platforms can address evolving civil society obligations in European online safety and media regulation

At a glance

  • A wave of new EU and UK rules requires digital platforms to give greater weight to formal requests for action from civil society organisations such as consumer rights groups, academics and specialist fact-checkers.
  • This is primarily driven by increasing European political and regulatory scrutiny of how platforms remove illegal content, manage systemic risks and combat disinformation.
  • This targeted emphasis on external, human-led scrutiny may create tension with the ongoing trend of digital platforms integrating AI into their internal risk management processes, itself often driven by cost and scalability considerations.
  • To navigate this developing landscape and respond effectively to the new requirements, platforms should reassess their internal governance, operational responsiveness and strategic approach to external engagement.
  • Those that successfully balance the efficiency of AI with robust mechanisms for responding to expert civil society input will be best placed to build scalable, compliant and trusted operations.

The shifting landscape and geopolitical context

AI has for some time played an important role in how digital platforms respond to their regulatory requirements, with its use in online content moderation being an obvious example. Ofcom’s recent report on its strategic approach to AI 2025/26 noted that “Online platforms use automated content moderation to identify harmful content at scale and with greater speed, helping improve safety for their users”. Ofcom also observed that GenAI could be used to generate synthetic content to plug gaps in the training data used to develop content moderation tools.

The growing use of automated moderation systems is a trend evident in transparency reports produced by services under the Digital Services Act (DSA). In addition, the European Commission’s DSA Transparency Database shows that 49% of decisions taken to remove content in the last six months were fully automated. Given ongoing advancements in AI and the opportunity for cost efficiencies, this use is expected to grow.

Civil society organisations and external experts have also long played a crucial role in the online safety ecosystem, shaping both policy and public discourse. According to UNESCO1, civil society refers to the collective of non-governmental organisations, community groups and associations that operate independently of the state and the market. It is seen as fostering civic engagement, promoting social justice, advocating for public interests and ultimately strengthening democracy and enhancing community resilience.

New UK and EU regulations are increasingly formalising this role. In the EU, this shift is part of a deliberate, high-level policy, as underscored by the European Commission’s recent ‘Strategy for Civil Society’, which highlights the vital role such organisations play in "democratic checks and balances, helping monitor policy and decision-making and fostering transparency and accountability."

In the context of digital regulation, relevant civil society organisations may include:

  • Consumer rights groups, identifying and reporting online fraud and scams.
  • Child protection charities, advocating for safer online environments for minors and reporting harmful content.
  • Digital rights organisations, campaigning on issues including privacy, data protection and freedom of expression.
  • Specialist fact-checking organisations and academic researchers, countering false narratives and analysing foreign influence operations.

In the current geopolitical climate, certain civil society groups have faced scrutiny and criticism, given the debate about the relationship between content moderation and freedom of speech. In this context, earlier this year the European Board for Digital Services affirmed that "independent civil society organisations… play an essential role in the effective application of the DSA in the EU" and that it "rejects any measures that undermine that essential role of civil society and trusted flaggers."

The growing importance of civil society is also evident in the evolving UK and EU regulatory landscape, something that in-scope platforms will need to balance against their increasing use of AI to respond to and mitigate risk.

Figure 1 sets out a number of regulatory activities that we see as being particularly relevant in this regard, which we elaborate on further in the remainder of this article.


Figure 1: Key elements of the emerging civil society regulatory approach

Understanding the new requirements

As Figure 1 illustrates, a new regulatory toolkit is formalising civil society's role across a number of areas. This includes helping to address the dissemination of illegal content, supporting the identification and study of systemic risks, and combatting disinformation.

The dissemination of illegal content

Expert bodies already play a crucial role in reporting specific content instances to platforms. This manifests in voluntary initiatives run by digital platforms to collaborate with expert partners on content moderation and policy, as well as formalised regulatory requirements.

Under the DSA, Digital Services Coordinators (the Member State authorities responsible for monitoring and enforcing obligations) can formally appoint organisations as trusted flaggers. Platforms are then required to prioritise notices of illegal content submitted by these flaggers. In the UK, the focus of Ofcom’s Codes of Practice is more narrowly on fraudulent content, with the list of recommended flaggers including public bodies such as law enforcement and the Financial Conduct Authority.

At the time of writing2, 71 trusted flaggers have been designated under the DSA across 21 of the 27 EU Member States, as set out in Figure 2.


Figure 2: Trusted flaggers across Europe

Despite this established framework, analysis of the DSA Transparency Database, which captures content moderation decisions taken by online platforms, reveals a regime that is still maturing, with several key trends evident:

  • Trusted flagger activity is not yet picking up on the largest platforms: A surprising trend in the data is that Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), despite their large user bases, account for only 2% of moderation decisions taken following a trusted flagger notice. Instead, the majority of trusted flagger-initiated decisions occur on smaller platforms.
  • Trusted flaggers have a higher action rate: Notices from trusted flaggers lead to content removal (as opposed to restrictions) more often than other notices, suggesting their reports are either more accurate or carry greater weight.
  • There is fragmented trusted flagger activity across the EU: There are material differences in the number of trusted flagger-initiated content moderation decisions across EU Member States, with more activity in France for example. This is perhaps no coincidence, given the French regulatory authority, Arcom, is the vice-chair of the European Board for Digital Services' trusted flagger working group. As this group works to establish and share regulatory best practices across the EU, we may expect to see an increase in trusted flagger activity in other Member States.
  • Trusted flaggers raise a greater variety of issues: The majority of trusted flagger-initiated content moderation decisions relate to illicit text-based content as well as illegal or non-compliant products. This contrasts with decisions taken by platforms on their own initiative, which relate mainly to illegal or non-compliant products.
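
For those wishing to explore the underlying data themselves, the sketch below illustrates how the statement-of-reasons exports from the DSA Transparency Database could be aggregated by notice source and level of automation. It is a minimal sketch only: the file name is a placeholder, and the column names and coded values (source_type, automated_decision and their values) reflect our understanding of the published schema and should be verified against the current data dictionary before use.

```python
import pandas as pd

# Load a statement-of-reasons export from the DSA Transparency Database.
# The file name is a placeholder; column names and values reflect our reading
# of the published schema and may need adjusting to the live data dictionary.
sors = pd.read_csv(
    "sor-global-export.csv",
    usecols=["platform_name", "source_type", "automated_decision"],
)

# Share of decisions that were fully automated (cf. the 49% figure cited above).
fully_automated = (sors["automated_decision"] == "AUTOMATED_DECISION_FULLY").mean()
print(f"Fully automated decisions: {fully_automated:.0%}")

# Compare decisions initiated by trusted flagger notices with all other sources.
summary = (
    sors.assign(trusted_flagger=sors["source_type"] == "SOURCE_TRUSTED_FLAGGER")
    .groupby("trusted_flagger")["platform_name"]
    .agg(decisions="count", platforms="nunique")
)
print(summary)
```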

Given these findings, it is no surprise that the role of trusted flaggers is an area of future regulatory focus. The European Commission is planning new guidelines to streamline the appointment of trusted flaggers, which could substantially increase both the number of designated flaggers and the volume of notices that platforms must process. Following a recent update, the timeline for these guidelines has been revised: a public consultation is now expected in Q2 2026, with final adoption anticipated before the end of 2026.

The identification and study of systemic risks

Beyond flagging individual pieces of illegal content, both the Online Safety Act (OSA) and DSA now include formal mechanisms allowing external experts to identify systemic issues.

In the UK, Ofcom’s super-complaints regime, effective since January 2026, allows eligible expert bodies to formally raise concerns about systemic risks of harm across services (or on a single service in severe cases). These complaints can trigger regulatory action, including information gathering and enforcement, and carry reputational risks given Ofcom’s commitment to transparency.

Similarly, the EU’s new vetted researcher regime, in force since October 2025, provides a mechanism for approved researchers to access non-public platform data to investigate systemic risks and their mitigation. Platforms must adhere to specific processes and timescales, outlined in a Delegated Act, when responding to these requests.

While the UK currently lacks an equivalent, the Data (Use and Access) Act has amended the OSA, granting the Government the power to introduce regulation to set up such a regime. Whilst the Government has not yet proposed next steps on this topic, Ofcom has produced a report setting out “three potential policy options that can facilitate greater researcher access to information about online safety matters, which the UK Government may consider as part of the design of any future access framework”. This includes considering an option largely along the same lines as the DSA regime, to the extent that Ofcom suggests it would require platforms already subject to DSA obligations to make only “minimal changes”.

Combatting disinformation

Finally, combatting disinformation is a critical driver, with its importance magnified by the geopolitical trends discussed earlier. This is clearly demonstrated by the European Democracy Shield proposals, which signal a firm intention to formalise the role of expert groups in safeguarding European democracy.

The European Media Freedom Act also demonstrates this dynamic. While the Act is designed to protect democratically important content, the Commission has acknowledged its potential for exploitation. The primary risk is that malicious actors, such as state-backed disinformation outlets, could register as Media Service Providers (MSPs). By doing so, they could exploit the protections afforded to MSPs, such as a mandatory 24-hour takedown delay, allowing manipulative content to circulate widely before it can be removed. To counter this vulnerability, recent guidance encourages VLOPs to establish "dedicated channels" and "feedback mechanisms", empowering civil society to identify and report organisations that are abusing these protections.

Beyond the Media Freedom Act, the role of civil society is also being embedded into proactive governance frameworks under the DSA. For instance, such organisations are now routinely included in Digital Services Coordinator-led roundtable discussions ahead of major elections. Similarly, the DSA Code of Practice on Disinformation establishes a 'Rapid Response System', which enables civil society groups to flag time-sensitive, election-related disinformation. This system has been deployed during recent elections in Hungary and Bulgaria, demonstrating a particular role for civil society and fact-checking organisations during critical democratic events.

How online platforms can respond

Civil society's increasingly formalised role introduces new regulatory requirements that companies will need to navigate, and these should complement the increasing emphasis on AI tools within risk management processes. Though each regime has distinct obligations, they all empower civil society organisations and external experts, which means the strategic implications are largely common across regimes. Affected services should therefore consider a holistic response, rather than addressing each requirement in isolation.

This section explores these common implications through three key areas for online services to focus on: Robust Governance and Documentation, Responsive Operations and Strategic Engagement.


Figure 3: Overview of key actions

Robust governance and documentation

Platforms should first ensure their internal governance structures are ready to handle formal submissions from civil society, alongside their internal use of AI tools. The expected regulatory scrutiny demands clear, accountable and well-documented processes.

  • Establish Clear Ownership and Escalation Paths: Formalised internal processes, clear lines of responsibility and defined escalation paths will be critical to effectively manage all civil society inputs. This covers trusted flagger notices, vetted researcher requests and regulatory inquiries following a super-complaint. This will require clearly defined roles across Legal, Policy, Trust & Safety and Communications teams to ensure requests are actioned in line with regulatory requirements.
  • Maintain Records and Evidence Justifying Decisions: Maintaining comprehensive records of safety measures, risk assessments and content moderation decisions is important, as this evidence justifies decisions and demonstrates compliance with well-reasoned policies (an illustrative decision record follows this list). Beyond responding to regulatory inquiries (e.g. from Ofcom following a super-complaint), such records will also aid effective transparency reporting, including under incoming UK requirements.
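
By way of illustration, the sketch below shows one possible shape for a single moderation decision record that would allow a platform to evidence who raised an issue, the policy or legal basis applied, whether automation was involved and who reviewed it. The field names are our own assumptions for the purpose of this example, not a format prescribed by the DSA or OSA.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ModerationDecisionRecord:
    """Illustrative audit record for a single content moderation decision."""
    content_id: str
    notice_source: str                   # e.g. "trusted_flagger", "user_report", "proactive_ai"
    flagger_organisation: Optional[str]  # populated for trusted flagger notices
    policy_basis: str                    # internal policy and/or legal provision relied on
    action_taken: str                    # e.g. "removal", "visibility_restriction", "no_action"
    automated: bool                      # True if the decision was taken without human review
    reviewer_id: Optional[str]           # human reviewer, where applicable
    rationale: str                       # short free-text justification for the decision
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```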

Responsive operations

Beyond governance, platforms should ensure their operational processes are ready for the increased volume and complexity of these new formal interactions. For many, this means reviewing, adapting and scaling existing functions rather than creating new ones.

  • Design Compliant, User-Friendly Interfaces: Ongoing European Commission investigations have highlighted concerns about user-friendly and accessible 'Notice and Action' mechanisms, specifically potential 'dark patterns' including unnecessary friction. Across all operations, including processes for trusted flaggers, vetted researchers and under the Media Freedom Act, interfaces should be designed to eliminate undue burdens and ensure ease of use.
  • Assess Resource and Technology Scalability: Given the potential increase in requests, companies should evaluate whether their current systems can handle the demand. This involves ensuring that prioritisation processes are fit for purpose and that human and technological resources are sufficient. Companies should proactively stress-test existing tools, particularly for high-stakes events like elections, and consider how to integrate new channels (e.g. feedback mechanisms under the Media Freedom Act) with existing intake systems to avoid duplication and inefficiency. This stress-testing, similar to requirements in financial services, enables platforms to identify and address potential bottlenecks in a controlled environment before they impact compliance and public safety.
  • Define Prioritisation Principles and Metrics: To manage the varied quality and actionability of external submissions, companies should establish clear prioritisation principles and metrics. A one-size-fits-all approach is inappropriate when experience shows that even formal notices from trusted flaggers can lack required detail or reasoning. Instead, these principles should underpin a tiered review system designed to fast-track high-quality, actionable notices while identifying submissions that require further assessment (see the sketch after this list).
  • Adapt Data Access and Reporting Processes: The DSA's vetted researcher provisions will impose additional burdens on data governance and technical teams. Data infrastructure will need to be adapted to enable secure, compliant extraction of non-public data, ensuring sufficient granularity and providing supporting documentation (e.g. codebooks). An effective approach here will also benefit transparency reporting.
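
To illustrate the tiered review system referenced above, the sketch below outlines one way incoming notices might be triaged, fast-tracking complete trusted flagger notices while routing incomplete or lower-priority submissions for further assessment. The tier names and completeness checks are our own assumptions rather than a description of any platform's actual intake tooling.

```python
from dataclasses import dataclass

@dataclass
class Notice:
    notice_id: str
    source: str          # e.g. "trusted_flagger", "vetted_researcher", "user"
    category: str        # e.g. "illegal_product", "fraud", "disinformation"
    has_location: bool   # does the notice pinpoint the exact content (URL or ID)?
    has_reasoning: bool  # does it explain why the content is considered illegal?

def triage(notice: Notice) -> str:
    """Assign an illustrative priority tier to an incoming notice."""
    complete = notice.has_location and notice.has_reasoning
    if notice.source == "trusted_flagger":
        # Trusted flagger notices are prioritised, but incomplete ones are
        # routed back for further information rather than fast-tracked.
        return "tier_1_fast_track" if complete else "tier_2_request_more_info"
    if complete:
        return "tier_2_standard_review"
    return "tier_3_needs_assessment"

# Example: a complete trusted flagger notice about fraud is fast-tracked.
print(triage(Notice("n-001", "trusted_flagger", "fraud", True, True)))
```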

Strategic engagement

The transparency inherent in these new mechanisms presents both potential reputational risks and opportunities. Companies should consider how they engage with civil society and manage their public narrative.

  • Stakeholder Mapping and Dialogue: Companies should prepare for increased engagement with a wider range of stakeholders, including civil society organisations, researchers and Member State Digital Services Coordinators. Proactively mapping this landscape and engaging early may help platforms understand stakeholder priorities and establish dialogue channels before issues escalate, mitigating risks such as potential super-complaints in the UK. Similarly, establishing constructive dialogue may help improve the quality of formal trusted flagger notices by providing feedback on what makes them operationally actionable.
  • Prepare for Increased Public Scrutiny: The public nature of these mechanisms, including Ofcom’s commitment to publish super-complaint responses and requirements that vetted researchers in the EU make their findings public, means services should anticipate media interest and public scrutiny. In-scope companies should consider developing proactive communication and legal strategies to contextualise findings, highlighting their safety efforts in the process, in order to contribute positively to public debate.
  • Leverage Engagement for Strategic Intelligence: There is a growing expectation for online services to translate insights from civil society and academics into improved policies and safety outcomes. Platforms should identify opportunities arising from such engagement. For example, facilitating researcher access can provide valuable, independent intelligence on emerging harms. Transparent collaboration with civil society, as encouraged by the Media Freedom Act, offers a chance to remove bad actors while demonstrating a commitment to combatting disinformation, building public trust.

Conclusion

This remains an evolving area, but as should be clear from the above, it is one that regulatory authorities across Europe can be expected to continue to prioritise. Those platforms that can combine a continued and enhanced use of AI with efficient and effective processes for responding to external requests from appropriate civil society bodies stand to benefit most.

References:

1. UNESCO, the United Nations Educational, Scientific and Cultural Organization, is a specialised agency dedicated to strengthening international cooperation in the fields of education, science, culture and information.
2. All references in this article are correct as of 21 April 2026.