We know that AI can enhance the efficiency of organisations, transforming operating models and enabling data-driven decision-making.
In times of crisis, day-to-day pressures are exacerbated: timelines are constrained, information is often uncertain, and resources are stretched. A crisis is, by nature, unpredictable. Our current operating environment is defined by disruptive challenges, many of which have the potential to become crises.
In this article we consider how AI may be used to alleviate the operational and strategic challenges posed by crises. We make the case for AI not only delivering time-saving efficiencies, but also supporting decision-makers and the workforce to prepare for, respond to and recover from disruptive events.
For all the efficiencies and insights AI may offer, which we examine in this article, we cannot move forwards without recognising some of the current obstacles that organisations will face when utilising it in the context of crises.
As with business-as-usual, there are undoubtedly many legal and regulatory considerations, reliability issues, and wider ethical questions. Implementation cost remains a significant barrier for many tasks. AI-generated predictions rarely come with a meaningful confidence rating, so uncertainty quantification remains a challenge. Until AI can be trusted to be reliable, accurate and unbiased, all outputs must be reviewed and challenged by humans, potentially reducing efficiency savings.
In addition, responses must account for AI as a tool for disinformation. Understanding and verifying facts becomes more challenging with realistic deepfakes. Scenario plans must consider the impacts of misinformation, which include but are not limited to operational, reputational and strategic harm.
Today’s world is volatile and complex, with geopolitical, environmental, and operational risks high on the agenda. The term ‘crisis’ is often associated with a period in which organisations face intense scrutiny and heightened pressure. However, the ability to respond effectively is grounded in preparation. From comprehensive identification of risks to a culture which encourages collaboration, skills and knowledge form the base from which crises are managed. In the current operating environment, organisations must adopt a working assumption that disruptive events will continue to happen, with ever-increasing complexity of impacts. Going forward, they must also accept that AI will play a critical role in responding effectively.
This section discusses ways in which AI could better prepare organisations for crises. This builds upon current good practice: integrating crisis preparedness with risk and resilience functions, developing plans which are relevant and tested (e.g. through crisis exercises), and equipping teams with the knowledge and skills to respond effectively (e.g. using exercises, team briefings etc.).
Risk identification and threat analysis
Many organisations are already utilising AI to enhance risk identification and monitoring. It has a proven role in collating and analysing external risks, enabling organisations to run real-time dashboards to monitor and report against key indicators. For example, healthcare providers have adopted AI models to identify patients at risk of sepsis, triaging and highlighting risks across multiple medical indicators1.
Taking this further, AI may help to quantify vulnerabilities at a scale previously unfeasible. For example, regulators could use AI to map interdependencies across wide networks and industries, considering wider systemic risks. AI tools may be used to highlight single points of failure for critical services and monitor indicators of disruption before it occurs (such as rising material prices or increasing downtime). Real-time and historical data may be used to extrapolate trends, predict impacts, and recommend interventions, ultimately delivering cost savings to organisations.
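To make this concrete, a minimal sketch of indicator monitoring of this kind is shown below, assuming a hypothetical feed of daily readings (for example, a material price); the rolling window and threshold are illustrative rather than prescriptive.

```python
from statistics import mean, stdev

def flag_disruption(readings, window=14, z_threshold=2.5):
    """Flag readings that deviate sharply from a rolling baseline.

    `readings` is a list of (date, value) tuples for a single indicator,
    e.g. daily material prices or hours of downtime. A reading is flagged
    when it sits more than `z_threshold` standard deviations above the
    mean of the preceding `window` readings.
    """
    alerts = []
    for i in range(window, len(readings)):
        baseline = [value for _, value in readings[i - window:i]]
        mu, sigma = mean(baseline), stdev(baseline)
        date, value = readings[i]
        if sigma > 0 and (value - mu) / sigma > z_threshold:
            alerts.append((date, value, round((value - mu) / sigma, 1)))
    return alerts

# Illustrative data: a stable price series with a late spike.
series = [(f"2024-01-{d:02d}", 100 + (d % 3)) for d in range(1, 29)]
series.append(("2024-01-29", 140))
print(flag_disruption(series))
```

In practice the same pattern would run across many indicators at once, feeding the real-time dashboards described above rather than a single print statement.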
Plan and policy development
An enhanced awareness of risks can be used to develop targeted crisis, continuity and contingency plans, with AI playing a role in drafting materials and establishing thresholds based upon data and predictive analysis. In this context, AI will also be able to propose updates to plans in near ‘real time’, for example to reflect learnings from ongoing incidents.
Moving forwards, AI tools may be used to conduct gap analysis across complex organisations, supporting local entities or departments to interpret central policy, understand relevant regulations, and adopt a consistent approach. It will become easier to measure and report on progress, with tangible records and feedback.
In this context, AI can be used to develop robust scenario plans for emerging issues, modelling response options and their potential impact on stakeholders, metrics and risks.
Training and exercising
Once plans are established, they must be embedded across teams. Exercises can provide an effective training tool to raise awareness of plans and identify gaps, using a fictional scenario to stimulate discussion. They provide a safe environment to build the individual and collective skills needed to respond under pressure. Exercises typically take the form of short discursive ‘desktop’ workshops or single/multi-team ‘simulation’ formats, often spanning geographies and businesses.
Scenarios are currently designed to reflect risks in the internal and external environments. In future, AI may be used to enhance and simplify exercise design, drafting multi-faceted escalations and reflecting impacts across complex stakeholder groups.
Integrating AI into simulation exercises will enable teams to play out the financial, regulatory, human and reputational permutations of decisions, extrapolating based upon real-time data and historical metrics. It will be used to simulate live ‘reactions’ to decisions made by leaders, enabling teams to ‘red team’ decisions.
In red-teaming, AI could be trained to represent key stakeholders, providing dynamic challenge to decisions made by leaders based on previous interactions.
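One possible shape for such an AI red team is sketched below. It assumes access to a text-generation model behind a placeholder `call_llm` helper (hypothetical; any approved hosted or local model could sit behind it), and the stakeholder personas and prompts are purely illustrative.

```python
# Sketch of AI-assisted red-teaming: each persona challenges a proposed
# decision from its own perspective. `call_llm` is a placeholder for
# whichever text-generation model an organisation has approved for use.

PERSONAS = {
    "regulator": "You are a sceptical regulator focused on consumer harm.",
    "major_customer": "You are a key customer worried about service continuity.",
    "employee_rep": "You represent staff concerned about safety and job security.",
}

def call_llm(system_prompt: str, user_prompt: str) -> str:
    # Placeholder: replace with a call to an approved model or API.
    return f"[model response to: {user_prompt[:60]}...]"

def red_team(decision: str) -> dict:
    """Collect a challenge to `decision` from each simulated stakeholder."""
    prompt = (
        "A crisis leadership team proposes the following decision:\n"
        f"{decision}\n"
        "Identify the two strongest objections from your perspective."
    )
    return {name: call_llm(role, prompt) for name, role in PERSONAS.items()}

print(red_team("Pause all outbound customer communication for 48 hours."))
```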
Alongside this, the format of training and crisis exercises will adapt, taking advantage of reduced costs in audio, video and translation. Training will evolve to be more immersive, globally inclusive and ‘gamified’, encouraging take-up. For example, workshops will be enhanced with realistic video bringing to life the impacts of fictional scenarios. AI will be used to create immersive experiences, such as using Virtual Reality headsets to prepare spokespeople for crowded press conferences, or to simulate a newsroom.
Next, we consider how AI can improve crisis response, addressing some of the most frequent challenges for leaders: uncertainty and ambiguity, optimism bias and groupthink. The risks associated with AI must be recognised, but with appropriate controls it has the potential to play a key supporting role throughout a crisis.
Situational awareness
Establishing ‘what has happened’, an overarching understanding of a situation and how it is evolving or may evolve, is critical to decision-making. In complex crises, this can be a hugely time-consuming role: gathering information from countless sources, analysing and verifying it, and consolidating it to understand the situation, risks and open questions. The validity and relevance of data are key.
Situational awareness is one of the areas in which AI is already being successfully deployed. There are many tools available to, for example, analyse sentiment across large stakeholder groups, enabling real-time tracking of evolving issues. Sentiment analysis tools are able to model and utilise data in a time-efficient way, identifying trends, outliers, and emerging issues.
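As a simplified illustration, the sketch below scores hypothetical stakeholder messages against small positive and negative word lists and aggregates them by hour; production tools use trained models rather than word lists, and the vocabulary and messages here are invented for the example.

```python
from collections import Counter

NEGATIVE = {"outage", "angry", "refund", "unsafe", "failure", "delay"}
POSITIVE = {"resolved", "thanks", "helpful", "restored", "quick"}

def score(message: str) -> int:
    """Crude polarity score: +1 per positive word, -1 per negative word."""
    words = [w.strip(".,!?") for w in message.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def hourly_trend(messages):
    """Aggregate scores per hour to surface an emerging issue."""
    totals = Counter()
    for hour, text in messages:
        totals[hour] += score(text)
    return dict(sorted(totals.items()))

sample = [
    (9, "Still no update on the outage, getting angry"),
    (9, "Requesting a refund after this failure"),
    (10, "Service restored, thanks for the quick response"),
]
print(hourly_trend(sample))
```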
Further application of AI will enhance real-time dashboards, setting out the situation using verified sources. This will ease consolidation of data across geographies, entities, or governments, providing an up-to-date status of impacts.
To effectively support situational awareness, AI tools must be trained to rate credibility and highlight inconsistencies in data, enabling leaders to scrutinise and question ‘facts’. Ironically, AI will likely play a role in identifying AI-generated misinformation, such as deepfakes.
Objective setting
Having established the situation, leaders must set appropriate objectives to guide onward decision-making. This is one area, like others (see Human vs Machine), where people must continue to lead. Setting objectives requires leaders to carefully balance organisational values, situational context, and a duty of care towards people and the organisation.
For example, organisations responding to incidents will often cite protection of people as a primary objective. This drives decisions where subtle impacts on emotional and physical health are considered, as well as ethical judgements. In contrast, machine-based logic may focus on tangible measurements of performance – e.g. share price, cost savings, or sales revenue, which could drive negative behaviours (such as comparing financial cost to people impact). Perhaps AI will one day replicate human intuition. But for the moment, people remain critical.
Decision-making
AI-generated scenario modelling, such as ‘digital twins’, will enable leaders to play through the potential impacts of decisions. This can be used to challenge optimism bias and ‘groupthink’ in crises, demonstrating the potential impacts of differing responses. Digital twins allow organisations to replicate physical or virtual environments and observe the impacts of change across a wide range of metrics, including operational, regulatory and reputational. They are rapidly being adopted across a wide range of industries to test response options, such as replicating technology estates to model Net Zero impacts4, predicting impacts on liquidity in stress scenarios5, and considering options in disaster response6.
This application of AI has significant potential, and merits dedicated research to understand where and how its outputs can be relied upon in a crisis.
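As a simplified illustration of the underlying idea, the sketch below runs a small Monte Carlo comparison of two hypothetical response options against cost and disruption metrics; the options, distributions and figures are invented, and a real digital twin would model far richer interdependencies.

```python
import random

# Hypothetical response options with assumed impact distributions:
# (mean cost in £m, cost std dev, mean days of disruption, days std dev)
OPTIONS = {
    "switch_to_backup_site": (4.0, 1.0, 3, 1),
    "repair_in_place": (2.5, 1.5, 7, 3),
}

def simulate(option, runs=10_000, seed=42):
    """Estimate expected cost and disruption for a response option."""
    rng = random.Random(seed)
    cost_mu, cost_sd, days_mu, days_sd = OPTIONS[option]
    costs, days = [], []
    for _ in range(runs):
        costs.append(max(0.0, rng.gauss(cost_mu, cost_sd)))
        days.append(max(0.0, rng.gauss(days_mu, days_sd)))
    return {
        "expected_cost_gbp_m": round(sum(costs) / runs, 2),
        "expected_disruption_days": round(sum(days) / runs, 1),
        "p90_cost_gbp_m": round(sorted(costs)[int(runs * 0.9)], 2),
    }

for name in OPTIONS:
    print(name, simulate(name))
```

Presenting a tail figure such as the 90th-percentile cost alongside the averages is one way a model like this can challenge optimism bias in the room.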
Information management
Pressure on resources can be reduced as labour-intensive roles are supported with technology. Process-heavy roles, such as compiling situation reports, note taking, or tracking actions, have the potential to be made more efficient with existing AI tools. However, not all of these roles should be replaced. Over-adoption of AI may have unintended psychological impacts. For example, it is essential for leadership teams to be able to discuss all potential response options to a scenario, without fear of misrepresentation. Experts in organisational behaviour theorise that the presence of AI transcription alters the social fabric of meetings, removing the safety of a known, closed group, and potentially impacting outcomes7.
In addition, later use of AI transcriptions (such as in litigation) must be considered. Out of context, word-for-word minutes of meetings may be easily misinterpreted, or misrepresent the tone of a conversation.
Communication
Perhaps one of the areas where AI has the potential to make the most impact is communication. In crises, communication teams are often under extreme pressure, with the demand for updates outstripping the capacity of a team designed for business-as-usual. Teams are often overwhelmed with requests: required to translate technically challenging information into easy-to-interpret messages; tailoring materials to multiple stakeholder groups; and working to a schedule dictated by the news cycle and 24/7 social media coverage. Requests often accumulate faster than teams are able to respond.
Many of these time-intensive tasks can be delegated to technology. AI can already be used to draft key materials, summarising the situation and developing holding statements, press releases and Q&A documents. With appropriate input (and human oversight), AI can adapt the tone of messages to align with response objectives and even be used to challenge or critique materials.
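A small sketch of how such drafting support might be structured is shown below: verified facts, the target audience and a chosen tone are assembled into a consistent brief, with generation and human review happening downstream. The fields, tones and wording are illustrative assumptions rather than a prescribed approach.

```python
# Sketch: assemble a consistent drafting brief for a holding statement.
# The verified facts, audiences and tone options are illustrative; the
# resulting brief would be passed to an approved model and the output
# always reviewed by the communications team before release.

TONES = {
    "reassuring": "Calm, empathetic, no speculation about cause.",
    "factual": "Plain statement of confirmed facts and next update time.",
}

def build_brief(facts, audience, tone):
    if tone not in TONES:
        raise ValueError(f"Unknown tone: {tone}")
    bullet_facts = "\n".join(f"- {fact}" for fact in facts)
    return (
        f"Draft a holding statement for {audience}.\n"
        f"Tone guidance: {TONES[tone]}\n"
        f"Use only these verified facts:\n{bullet_facts}\n"
        "Do not speculate beyond the facts provided."
    )

print(build_brief(
    facts=["Systems offline since 06:40", "No evidence of data loss so far"],
    audience="customers",
    tone="reassuring",
))
```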
Following the development of materials, AI enabled sentiment analysis can be an effective tool to understand the external environment and quantify the impact of a communication strategy.
In addition, there are other areas where AI could support communication in the future. For example, the review process for communication in crises is often fragmented, with statements passing through scores of stakeholders before being approved. In future, AI may be used to support the process, providing rationale and benchmarking to demonstrate why wording is appropriate, or highlighting legal risks across multiple jurisdictions.
At some point in the crisis response, attention will move towards ‘recovery’. Organisations will step back, often with external support, and consider ‘what next’. The approach taken to recovery, and the use of AI to support it, will depend on the nature of the incident. Leaders will revisit the long-term strategic objectives of the organisation and develop a plan to achieve these. In some cases, this may mean restoration of services and a return to the business-as-usual operating model. In others, a crisis may form the impetus to re-envision the organisation, building a more resilient model.
Restoring operations
The past few years have demonstrated the unpredictability of crises. Many organisations have navigated unprecedented situations, managing sudden geopolitical shifts, financial volatility, and disruption to supply chains.
Some organisations are able to mitigate the impacts, and the optimal outcome is a return to business-as-usual. For others, fundamental shifts in their operating model mean a pivot to a ‘new future’. In both cases, AI can support development of a long-term strategy, for example applied in scenario planning to test the feasibility of operating in a variety of stressed scenarios or conducting market research and benchmarking.
In other circumstances, crises may stem from sudden, severe incidents, resulting in immediate disruption to operations. In these scenarios, AI may play an operational role in incident response, for example, mitigating the impact of a cyber-attack at a manufacturing site by calculating alternative routes to deliver products, along with the costs and risks associated with each alternative.
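As an illustration of the routing calculation described above, the sketch below runs a simple shortest-path search over a hypothetical delivery network in which each route leg carries both a cost and a risk penalty; varying the risk weighting shows how the recommended route can change.

```python
import heapq

# Hypothetical delivery network: each edge carries (cost, risk_penalty).
NETWORK = {
    "plant": {"hub_a": (100, 5), "hub_b": (80, 30)},
    "hub_a": {"customer": (60, 5)},
    "hub_b": {"customer": (40, 25)},
    "customer": {},
}

def best_route(start, goal, risk_weight=1.0):
    """Dijkstra over combined cost + weighted risk; returns (total, path)."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        total, node, path = heapq.heappop(queue)
        if node == goal:
            return total, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, (cost, risk) in NETWORK[node].items():
            if nxt not in seen:
                heapq.heappush(
                    queue, (total + cost + risk_weight * risk, nxt, path + [nxt])
                )
    return None

# Compare a cost-only view with a risk-aware view of the same network.
print(best_route("plant", "customer", risk_weight=0.0))
print(best_route("plant", "customer", risk_weight=2.0))
```

The cost-only run favours the cheaper leg, while the risk-weighted run recommends the safer route, mirroring the cost-versus-risk trade-off the response team would need to make.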
Learning lessons
Moving out of the immediate response to a crisis, leaders seek to understand what has happened, how and why. The desire to understand the root cause of a crisis can be led by internal stakeholders, the Board, or at times, in response to wider public inquiries or litigation. Thorough post-incident reviews will seek to discover and identify all relevant artefacts: written evidence of conversations and supporting documents. AI will support this process, working through thousands of documents to identify relevance and highlight critical decisions.
Alongside this, AI tools can be used to gather, analyse and summarise large sets of text: for example, responses to post-incident reviews. This can feed into post-incident reports, drafting observations and recommendations for review. AI tools may also be used to compare target metrics or regulations with actual performance, providing a fact-based assessment of a crisis.
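The metric comparison mentioned above can be illustrated with a small sketch assessing hypothetical actual performance against pre-agreed targets; the metric names, targets and figures are invented for the example.

```python
# Sketch: fact-based post-incident assessment against pre-agreed targets.
# Metric names, targets and actuals are illustrative.

TARGETS = {
    "time_to_first_statement_mins": 60,
    "service_restored_hours": 24,
    "customer_complaints": 500,
}

ACTUALS = {
    "time_to_first_statement_mins": 45,
    "service_restored_hours": 38,
    "customer_complaints": 820,
}

def assess(targets, actuals):
    """Return per-metric target, actual and a simple met/missed flag."""
    report = {}
    for metric, target in targets.items():
        actual = actuals.get(metric)
        report[metric] = {
            "target": target,
            "actual": actual,
            "met": actual is not None and actual <= target,
        }
    return report

for metric, row in assess(TARGETS, ACTUALS).items():
    print(metric, row)
```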
Any residual issues can be recognised, and, as in the ‘prepare’ phase, monitored against fixed thresholds for signs of escalation.
Following reviews, using AI to make updates to plans and policies can save time, delivering consistent changes across multiple layers of documentation and comparing to relevant regulation.
All of these activities must be delivered with caution. The ‘root cause’ of crises is never straightforward. Incidents typically escalate through a series of events, decisions and responses. Human experience and an in-depth understanding of the triggers, history and wider context of a crisis will continue to be essential in learning from such events.
AI will transform the way that organisations identify risks, build resilience and respond to complex issues or major incidents. Resilient organisations will utilise AI not only to respond during disruption, but to build a strategic and operational advantage. Following crises, AI will support organisations not just to recover, but to adapt and improve. This article has identified five key opportunities to do this, spanning crisis preparation, response and recovery.
If you're interested in exploring how AI can strengthen your organisation's resilience and crisis response, please get in touch.
_____________________________________________________________________________
References
1. https://www.nature.com/articles/s41591-022-01895-z
2. Mesh-AI and National Grid Ventures Leverage AI to Protect Critical Underwater Infrastructure
3. https://mediacentre.britishairways.com/pressrelease/details/22785
4. https://www.lloydsbankinggroup.com/insights/digital-twins.html
5. https://results2021.ref.ac.uk/impact/c07d245a-5230-42a9-bcc8-e51d951077e6?page=1
6. https://www.sciencedirect.com/science/article/pii/S2212420924003911
7. https://hbr.org/2025/04/should-you-record-that-meeting
8. https://blog.google/outreach-initiatives/sustainability/google-ai-wildfire-detection/