In today’s technology risk landscape, the convergence of cyber risk and artificial intelligence (AI) presents both unprecedented opportunities and significant challenges for organisations. For board directors, understanding and governing these dynamics is no longer optional – it is a critical imperative. This article explores the current environment of cyber risk and AI, what good cyber practice looks like, and actionable steps board members can take to help their organisations remain resilient, innovative and competitive.
The business environment is undergoing a seismic shift, driven by technological acceleration, evolving cyber threats and the transformative potential of AI. According to Deloitte’s Governance of AI: A Critical Imperative for Today’s Boards, AI adoption is accelerating, with organisations increasingly recognising its potential to drive innovation, efficiency and market leadership. The pace of adoption remains a concern, with only 25% of board members expressing satisfaction with their organisation’s progress in integrating AI.
Simultaneously, the cyber threat landscape is becoming more sophisticated. The UK’s National Cyber Security Centre (NCSC) highlights that AI is not only a tool for innovation but also a potential enabler of new cyber threats. The NCSC’s report, The Impact of AI on Cyber Threats: Now and in 2027, underscores the dual role of AI: while it can enhance cyber security defences, it also provides adversaries with advanced tools for launching more targeted and effective attacks.
AI has the potential to revolutionise cyber security by enabling organisations to detect, respond to and recover from cyber incidents more effectively. AI-powered systems can analyse vast amounts of data in real time to identify anomalies, predict potential threats and automate responses. Yet the same technology can be weaponised by malicious actors to develop sophisticated phishing attacks, exploit vulnerabilities and evade detection. The NCSC says that AI will “almost certainly pose cyber resilience challenges to 2027 and beyond across critical systems and economy and society”, and Deloitte research shows that 49% of the FTSE 100 identified an emerging risk relating to AI in their most recent annual reports. Most of these emerging risks were linked to cyber security.
Cyber is a complex topic that requires a combination of technology solutions, robust processes and a strong awareness culture. When exercising oversight responsibilities, boards should be aware of the importance of being able to detect, respond to and recover from incidents, as well as having preventative controls in place to minimise the likelihood and scale of attacks. So what does this look like when done well?
- Senior accountability across the organisation and appropriate governance – it is not just an IT issue
- A defined cyber resilience strategy, aligned to the business strategy, that covers the breadth of cyber from protection to response and recovery – all underpinned by a clear risk appetite
- Clear metrics and reporting, with appropriate targets set and measured for both transformation activity and business as usual
- Clear understanding of critical assets and the associated impacts on confidentiality, availability and integrity
- A strong focus on the human element and cyber culture: training and awareness, and security by design
- Third-party security and resilience: understand your critical suppliers from both a cyber perspective and a business supply chain perspective
- Cyber hygiene: a relentless focus on the foundations, such as patching and privileged access management controls
- Continuous improvement: the technology landscape and threat actors are always evolving with the advent of AI and, in future, quantum computing
- Code generation: ensure sufficient rigour over the use of AI to produce code, as AI-generated code can present a new attack surface
The following are some areas in which boards can consider how best to adopt a proactive and comprehensive approach to the oversight of cyber risk and AI in their organisations.
| Key area | Observation | Key questions |
|---|---|---|
| Elevating AI and cyber risk on the agenda | According to Deloitte’s survey, 31% of boards still do not have AI on their agenda, and 66% of board members report that their boards have insufficient knowledge of AI. Effective governance in the digital age requires board members to be technologically fluent and cyber-aware. Consider introducing AI and cyber risk as standing agenda items, ensuring regular and informed discussions. | |
| Governance of AI adoption and integration | AI adoption should align clearly with strategic objectives and ethical standards. Deloitte’s Governance of AI report highlights the importance of boards understanding AI’s impact, developing a comprehensive AI strategy and defining the organisation’s risk appetite. | |
| Embedding controls around the adoption of Agentic AI | On average, market estimates suggest that the autonomous AI agent market could reach US$8.5 billion by 2026 and US$35 billion by 2030. Deloitte predicts that if enterprises orchestrate agents better and thoughtfully address the associated challenges and risks, this market projection could increase by 15% to 30% – or as high as US$45 billion by 2030. | |
| Strengthening cyber resilience | Cyber security is no longer just an IT issue; it is a strategic business imperative. A practical approach to cyber resilience encompasses protection, detection, response and recovery. | |