
The technology risk landscape – considerations for boards

In today’s technology risk landscape, the convergence of cyber risk and artificial intelligence (AI) presents both unprecedented opportunities and significant challenges for organisations. For board directors, understanding and governing these dynamics is no longer optional – it is a critical imperative. This article explores the current environment of cyber risk and AI, what good cyber practice looks like, and actionable steps board members can take to help their organisations remain resilient, innovative and competitive.

Cyber risk, artificial intelligence and the board

The business environment is undergoing a seismic shift, driven by technological acceleration, evolving cyber threats and the transformative potential of AI. According to Deloitte’s Governance of AI: A Critical Imperative for Today’s Boards, AI adoption is accelerating, with organisations increasingly recognising its potential to drive innovation, efficiency and market leadership. The pace of adoption remains a concern, however, with only 25% of board members expressing satisfaction with their organisation’s progress in integrating AI.

Simultaneously, the cyber threat landscape is becoming more sophisticated. The UK’s National Cyber Security Centre (NCSC) highlights that AI is not only a tool for innovation but also a potential enabler of new cyber threats. The NCSC’s report, The Impact of AI on Cyber Threats: Now and in 2027, underscores the dual role of AI: while it can enhance cyber security defences, it also provides adversaries with advanced tools for launching more targeted and effective attacks.

AI has the potential to revolutionise cyber security by enabling organisations to detect, respond to and recover from cyber incidents more effectively. AI-powered systems can analyse vast amounts of data in real time to identify anomalies, predict potential threats and automate responses. However, the same technology can be weaponised by malicious actors to develop sophisticated phishing attacks, exploit vulnerabilities and evade detection. The NCSC says that AI will “almost certainly pose cyber resilience challenges to 2027 and beyond across critical systems and economy and society”, and recent Deloitte research shows that 49% of the FTSE 100 identified an emerging risk relating to AI in their latest annual reports. Most of these emerging risks were linked to cyber security.

What does good cyber look like?

Cyber is a complex topic that requires a combination of technology solutions, robust processes and a strong awareness culture. When exercising their oversight responsibilities, boards should understand the importance of being able to detect, respond to and recover from incidents, as well as having preventative controls in place to minimise the likelihood and scale of attacks. So what does this look like when done well?

  • Senior accountability across the organisation and appropriate governance – it is not just an IT issue
  • A defined cyber resilience strategy, aligned to the business strategy, that covers the breadth of cyber from protection to response and recovery – all underpinned by a clear risk appetite
  • Clear metrics and reporting, with appropriate targets set and measured for both transformation activity and business as usual
  • A clear understanding of critical assets and the associated impacts on their confidentiality, availability and integrity
  • A strong focus on the human element and cyber culture: training and awareness, and security by design
  • Third-party security and resilience: understand your critical suppliers from both a cyber and a business supply chain perspective
  • Cyber hygiene: a relentless focus on foundations such as patching and privileged access management
  • Continuous improvement: the technology landscape and threat actors are always evolving, with the advent of AI and, in future, quantum computing
  • Code generation: ensure sufficient rigour over the use of AI in producing code, as AI-generated code can present a new attack surface for cyber attacks

A proactive approach 

Below are some areas in which boards can consider how best to adopt a proactive and comprehensive approach to the oversight of cyber risk and AI in their organisations.

For each key area below, we set out an observation and key questions for the board.

Elevating AI and cyber risk on the agenda

According to Deloitte’s survey, 31% of boards still do not have AI on their agenda, and 66% of board members report that their boards have insufficient knowledge of AI. Effective governance in the digital age requires board members to be technologically fluent and cyber-aware. Consider introducing AI and cyber risk as standing agenda items, ensuring regular and informed discussions.

  • How is AI impacting or likely to impact our organisation, directly or indirectly?
  • What is our organisation’s cyber maturity and how are we measuring the effectiveness of our cyber programme?
  • Is there enough foundational AI and cyber security education provided for our board members and do we have appropriate specialists involved to provide insights on emerging technologies and threats?
  • Should we consider enhancing our board composition to include more directors with expertise in AI and cyber security?

Governance of AI adoption and integration

AI adoption should align clearly with strategic objectives and ethical standards. Deloitte’s Governance of AI report highlights the importance of boards understanding AI’s impact, developing a comprehensive AI strategy and defining the organisation’s risk appetite.

  • Does management have a clear strategy for AI adoption and integration?
  • How do we evaluate and mitigate risks associated with AI, including bias, fairness, and security?
  • Do we understand our regulatory and contractual obligations and track our compliance with these?

Embedding controls around the adoption of Agentic AI

On average, market estimates suggest that the autonomous AI agent market could reach US$8.5 billion by 2026 and US$35 billion by 2030. Deloitte predicts that if enterprises orchestrate agents better and thoughtfully address the associated challenges and risks, this market projection could increase by 15% to 30%, to as high as US$45 billion by 2030.

  • Has our organisation considered the possibilities of Agentic AI for how we do things internally, how we interface with customers and other third parties, and for potential new or enhanced products or services?
  • Have we assessed whether we need new guardrails, controls and monitoring in place over any agents we establish?
  • How is management keeping up to date with how cyber controls and technologies to manage the risks around Agentic AI are developing?
  • Does management have a position on how agentic identities (e.g. credentials and access rights) will be managed and how to ensure they don’t become targets for cyber criminals?

Strengthening cyber resilience

Cyber security is no longer just an IT issue; it is a strategic business imperative. A practical approach to cyber resilience encompasses protection, detection, response, and recovery.

  • Do we have a clear cyber resilience strategy aligned with business objectives and risk appetite?
  • Do we understand our critical assets and associated risks, including supply chain vulnerabilities?
  • Do we have a strong cyber culture, and how do we measure and support it?
  • Are we investing appropriately in cyber security? Are we comfortable with our understanding of the risk, our current exposure and the plans in place to address it?
  • How robustly has our organisation prepared for breaches? Do our preparations include incident response plans covering technical, PR and organisational strategies?

The full article, along with other content, is available in On the board agenda 2026.
