“The risk was always there. What’s changed is how fast it can be found.”
For more than a decade, cybersecurity conversations have centred on known issues: unsecured systems, latent vulnerabilities, and accumulated technical debt. These were not hypothetical risks. Boards were briefed and roadmaps were drafted. The underlying belief was that exposure unfolded at a human pace, leaving time to prioritise, plan, and respond.
In the age of AI-led cyberattacks, that belief no longer holds.
Recent developments around the new Mythos AI model mark a shift not only in what risks exist, but in how quickly they surface. Long-standing weaknesses, including custom code, configuration decisions, inherited dependencies, and years of architectural compromise, can now be interrogated continuously and exhaustively by autonomous systems operating at machine speed. Unlike human teams, these systems do not tire, develop blind spots, or have to ration their attention.
Discovery now outpaces decision-making. Because the assumptions behind today’s vulnerability management practices and cyber operating models were built around human speed, they are increasingly misaligned with the reality organisations face.
The technical capability is new, and so is the way it is being treated. When Mythos completed training, its creators deliberately constrained access, limiting the model to a set of organisations focused on defensive use. When frontier systems are handled with this much caution by their own creators, it signals that the balance between discovery, exploitation, and control has shifted.
So far, the emergence of Mythos has been handled with caution, but that restriction is unlikely to be durable. Models trained on similar techniques, data, and objectives will catch up quickly, often without the same safeguards or access controls. OpenAI’s Spud and DeepSeek’s V4 are both rumoured to be near release and may offer similar capabilities to Mythos. In this context of rapidly emerging frontier models, protection by policy decision is temporary at best.
The result is a world where vulnerability discovery moves at machine speed, not human pace.
Some fundamentals remain intact. Vulnerabilities have always existed in enterprise environments, and perfect code has never been the norm.
What has changed is the window between discovery and exploitation, which has collapsed rapidly: first from months to weeks, and now from weeks to hours.
This matters because vulnerability management programmes were designed for a world of scarce findings and human‑paced exploitation. That architecture does not scale when AI models can identify, validate, and contextualise critical flaws continuously, dramatically increasing both volume and velocity.
Why this is different:
The newest AI models can find critical vulnerabilities and ways to exploit them in hours, while most organisations still take weeks to remediate them. The widening gap between identification, remediation, and risk acceptance is where material cyber risk now concentrates.
This gap is rarely driven by lack of insight, but by organisational constraints: testing cycles, approvals, deployment windows, fragile legacy environments, and unclear decision rights. Decision-making and execution have become the limiting factor in risk reduction.
As a result, compliance posture and true risk exposure are beginning to diverge. Frameworks and operating rhythms designed for human‑speed threats struggle in a machine‑speed world, where last year’s “acceptable” response timelines may no longer withstand scrutiny.
The instinctive response to faster discovery is to buy faster tools. But responding to AI‑accelerated vulnerability discovery requires a far broader rethink. This is a people, process, and technology challenge.
As AI scales discovery, the talent premium shifts to making fast, business‑aware decisions at scale. Security teams must be equipped to triage at volume, assess business impact, and communicate priorities clearly to technology and business leaders.
Traditional vulnerability management models break down under continuous, high‑volume discovery. Quarterly cycles and static SLAs lose relevance. Organisations need operating models designed for continuous response, with explicit escalation paths, clear risk criteria, and the ability to reach decisions within 48 hours for critical findings.
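One way to make such risk criteria explicit is to encode them as a simple triage rule that maps each finding to a decision deadline. The sketch below is illustrative only: the fields, thresholds, and timelines are assumptions to be tuned to your own risk appetite, not a standard.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cvss: float            # severity score, 0.0-10.0
    crown_jewel: bool      # does it touch a business-critical asset?
    internet_facing: bool  # is the affected system externally reachable?

def response_deadline_hours(f: Finding) -> int:
    """Return the decision/escalation deadline for a finding, in hours.

    Thresholds are illustrative assumptions, not a published standard.
    """
    if f.cvss >= 9.0 and (f.crown_jewel or f.internet_facing):
        return 48          # critical finding: decision within 48 hours
    if f.cvss >= 7.0:
        return 7 * 24      # high severity: decision within a week
    return 30 * 24         # everything else: decision within a month

# A critical, internet-facing finding lands in the 48-hour lane.
print(response_deadline_hours(Finding(cvss=9.8, crown_jewel=True, internet_facing=True)))
```

The point of writing the rule down, even this crudely, is that escalation paths stop depending on individual judgement calls made under pressure.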
In practice, this means shifting from quarterly cycles to continuous response, and from static SLAs to risk-based decision timelines.
Layered defences still matter. Frontier AI may accelerate discovery, but defence‑in‑depth (segmentation, boundary hardening, and access controls) continues to prove resilient even against advanced techniques.
The most resilient organisations move fast with purpose, grounded in strong foundations and clear priorities.
Rather than reacting to every signal, leading organisations are redesigning vulnerability and attack surface management programmes strategically.
Key questions include: How quickly can we reach a risk decision on a critical finding? Which assets are our crown jewels, and who owns their exposure? Where do testing cycles, approvals, and deployment windows slow execution?
These questions shift the conversation from tools to operating model, from speed to velocity.
The most effective approaches to building resilience are pragmatic and phased, with a clear focus on what actually reduces risk. That said, timelines may need to move faster than expected as advanced AI capabilities become more accessible through broader release, rapid replication, or unintended exposure.
In addition to the technical steps outlined in this section, several corollary activities are prudent, such as setting aside capital for major remediation in the event of a cyber incident.
Start by getting clarity and tightening the basics. The goal in the first month is to understand where you’re most exposed and address issues that can quickly escalate risk.
Put emergency change procedures in place so critical fixes don’t get stuck in process. Take inventory of your software and third‑party ecosystem to precisely identify your most critical assets (or “crown jewels”) and the exposures that matter most to your business. The Anthropic Glasswing project will see selected tech companies, part of the fabric of the internet, remediate their own vulnerabilities, but most organisations also run owned, third-party, and open-source software that will need fixing, and that software will not be Glasswing’s primary focus.
Shore up identity fundamentals by eliminating static credentials and enforcing MFA, and translate regulatory and compliance requirements into a clear view of how they apply to your actual environment.
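Eliminating static credentials starts with finding them. A dedicated secret scanner is the right tool for this, but the idea can be sketched in a few lines: sweep source text for assignments that look like hardcoded passwords or API keys. The patterns below are illustrative assumptions and will both miss real secrets and raise false positives.

```python
import re

# Illustrative patterns for likely hardcoded credentials. A production
# secret scanner uses far richer detection (entropy checks, known key
# formats); these two regexes are a sketch of the concept only.
SECRET_PATTERNS = [
    re.compile(r"(?i)(password|passwd|pwd)\s*=\s*['\"][^'\"]{4,}['\"]"),
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def find_static_credentials(text: str) -> list[str]:
    """Return the lines of `text` that look like hardcoded credentials."""
    hits = []
    for line in text.splitlines():
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(line.strip())
    return hits

sample = 'db_user = "svc_app"\npassword = "hunter2-prod"\ntimeout = 30'
print(find_static_credentials(sample))  # flags only the password line
```

Anything flagged should be moved into a managed vault and rotated, which is what makes the credential dynamic rather than static.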
Within the next 90 days, you will need to proactively scan and remediate your own code and seek some understanding (or even better, assurance) that your other software and solution suppliers are doing the same. You do not need to wait for a broader release of Mythos to incorporate AI-led vulnerability management solutions, as there are other models and tools available to you now.
Make sure you prioritise where and how this will happen within the first 30 days to set your organisation up for success.
Once you have visibility, shift your attention to remediation and scale. This is where organisations can make meaningful dents in exposure if they’re deliberate.
Expect vulnerability discovery volumes to grow dramatically and scale your management programme accordingly. Build software composition visibility for critical systems so you understand what’s running and what’s at risk. Move security earlier into CI/CD pipelines, reassess third‑party exposure through vendor systems, and begin proactively scanning internal code.
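Software composition visibility can begin very simply: inventory the pinned dependencies of a critical system and check them against an advisory feed. The sketch below assumes a requirements-style file and uses a hypothetical in-memory advisory set; in practice you would query a real vulnerability database such as OSV or the GitHub Advisory Database.

```python
def parse_requirements(text: str) -> dict[str, str]:
    """Map package name -> pinned version from 'name==version' lines."""
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            deps[name.lower()] = version
    return deps

def flag_vulnerable(deps: dict[str, str],
                    advisories: dict[str, set[str]]) -> list[str]:
    """Return 'name==version' strings that match a known advisory."""
    return [f"{name}=={version}" for name, version in deps.items()
            if version in advisories.get(name, set())]

reqs = "requests==2.19.0\n# internal tooling\nurllib3==1.24.1\n"
# Hypothetical advisory data for illustration only.
advisories = {"urllib3": {"1.24.1"}}
print(flag_vulnerable(parse_requirements(reqs), advisories))
```

Even this level of visibility, run continuously in CI, answers the first question an AI-discovered vulnerability raises: do we run the affected component at all?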
With the fundamentals in place, it’s time to reset strategically and invest in capabilities that improve resilience over the long term. Think ahead and aim to create an environment that is air-gapped and can support the infrastructure and data critical to your business. Longer-term resiliency may include extending microsegmentation, as well as determining what constitutes your “minimum viable company” (the essential version of your operations required to function as an organisation).
Augment security operations with AI, extend zero‑trust architectures with strong authentication and authorisation, and introduce continuous red teaming and threat hunting. Refresh incident‑response readiness so teams are prepared for multiple, overlapping events, including materiality and disclosure considerations. Wherever possible, reduce risk from legacy or unsupported systems by limiting blast radius through defence‑in‑depth.
The result of this plan is a clear understanding of your true risk posture, measurable reductions in exposure, and a practical path forward to evolve risk governance across the organisation.
No organisation will eliminate the structural asymmetry between attackers and defenders. But those that act now can materially improve their position.
Risk grows quickly when discovery spills beyond the organisation. Whether it’s accidental exposure, shared research, or a leak, vulnerability information can spread faster than response teams can act. In those moments, risk is defined by how quickly the information spreads and who gets to it first.
Invest in visibility, decision velocity, and disciplined execution for a world where adversaries operate continuously and at machine speed. Human decisions, made faster, closer to the business, and with clearer intent, will determine who stays ahead.
Deloitte supports organisations end-to-end as AI changes both the pace and impact of cyber risk. With decades of experience across complex, regulated enterprises, we help clients strengthen core cyber fundamentals, bring greater visibility and context to their most critical assets and dependencies, and translate technical exposure into clear business decisions. Our focus is on scaling and automating where it matters, prioritising effort effectively, and enabling faster, more confident action to improve organisational resilience.
As AI continues to shift cyber risk to machine speed, we are committed to bringing our clients and communities together to strengthen collaboration across the ecosystem. Together, we can improve resilience and readiness as cyber threats continue to evolve.