AI can unlock savings in ER&I, but weak cyber oversight can reverse the return. Are you ready to secure AI before it costs you?

In Canada’s energy, resources, and industrial (ER&I) sectors, AI is proving its place. But scaling AI without embedded security puts value, compliance, and leadership at risk. Here’s how cyber security leaders can stay ahead.

From grid optimization and predictive maintenance to procurement and back-office workflows, generative and agentic AI are powering Canada’s ER&I sectors to create efficiencies, prolong asset lifecycles, and improve system resilience.

But as AI adoption accelerates, cyber oversight from the chief information security officer (CISO) is struggling to keep pace. While agentic AI usage is expected to rise across Canada in the next two years, only 25% of organizations say they have advanced governance in place for autonomous agents.

In ER&I environments where operational resilience and safety are non-negotiable, small governance gaps can quickly compound into disruption, financial loss, or regulatory exposure.

Cyber leaders must ask themselves: Is our cyber function ready to secure AI at the speed (and scale) our own business demands?  

Most teams are scaling AI without scaling security 

AI is entering ER&I environments faster than existing governance structures and legacy architectures can absorb. Amid sustained cost pressures and rising regulatory scrutiny, teams across field operations, engineering, procurement, and corporate functions are adopting AI tools outside formal oversight, creating a growing shadow AI footprint across the enterprise.

This surge of shadow AI often bypasses security standards and leaves cyber leaders in the dark. As adoption accelerates, the governance gap widens, bringing operational, financial, and regulatory risks with it. With operational resilience and safety non-negotiable, risk is scaling faster than the controls designed to contain it.

What does this mean for the CISO? Enterprise-level accountability without enterprise-level oversight.

In this environment, alignment between the CISO and CIO/CTO on architecture and governance is no longer enough. Cyber leaders must engage decision-makers early and build stronger, enterprise-wide awareness of cyber, safety, and privacy obligations to ensure AI adoption scales with the right safeguards in place.

No-regret moves to strengthen your defences

Cyber security teams are under pressure to keep pace with AI while managing cost, regulation, and complexity. These five no-regret steps will help ER&I leaders close visibility gaps, contain risk, and protect margin without slowing innovation.

Gain visibility into AI usage, especially shadow AI
AI adoption in asset-intensive organizations is rarely centralized. Well-intentioned employees across field operations, IT, engineering, procurement, and corporate teams often adopt AI tools independently to meet cost and efficiency mandates. This makes it critical for CISOs to proactively understand where AI is being used across the organization before it scales beyond oversight.

To stay ahead, CISOs can leverage existing security tools (e.g., data discovery) to identify how AI enters and moves across business workflows and operational environments.

In practice: A Canadian energy company built custom classifiers directly into their data discovery platform to automatically detect where AI models, prompts, and datasets were being used across the business. Rather than waiting for projects to come through traditional approval channels, they built visibility directly into the flow of work. This gave the cyber team a scalable, low-friction way to monitor shadow AI without adding overhead.
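As an illustration only, a classifier of this kind can be sketched as a simple tagging rule applied to records exported from a data discovery or proxy platform. The endpoint hosts, keywords, and function names below are hypothetical, not the energy company's actual classifiers:

```python
import re

# Hypothetical signatures for flagging AI usage in logs or documents.
# The endpoint hosts and keywords are illustrative, not exhaustive.
AI_ENDPOINT_PATTERN = re.compile(
    r"\b(api\.openai\.com|api\.anthropic\.com|generativelanguage\.googleapis\.com)\b"
)
PROMPT_KEYWORDS = ("system prompt", "temperature=", "max_tokens")

def classify_record(record: str) -> list[str]:
    """Tag a log or document record with AI-usage indicators."""
    tags = []
    if AI_ENDPOINT_PATTERN.search(record):
        tags.append("external-ai-endpoint")
    if any(kw in record.lower() for kw in PROMPT_KEYWORDS):
        tags.append("prompt-artifact")
    return tags

# Example: scan records exported from a data discovery platform.
records = [
    "POST https://api.openai.com/v1/chat/completions from eng-workstation-14",
    "GET https://intranet.example.com/reports/q3",
    "config: temperature=0.2 max_tokens=512 model=internal-llm",
]
flagged = {r: classify_record(r) for r in records if classify_record(r)}
```

Running rules like these continuously against discovery output is what makes the monitoring low-friction: visibility comes from data already flowing, not from new approval gates.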

Build on existing defences
Not all security for AI needs to be new. Rather than reinventing their security stack, CISOs can tune control points they already rely on to protect the business, using capabilities such as attack surface mapping and network segmentation to strengthen their posture against AI-related threats.

In practice: When teams at a Canadian industrial products and construction firm began turning to external AI tools like ChatGPT, Claude, and Gemini, the response wasn't to build new infrastructure or issue a blanket ban. The organization activated existing cloud access security broker capabilities to detect and redirect that behaviour toward its approved internal solution, closing the data exfiltration gap without adding complexity or disrupting the way people worked.
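A redirect policy of this kind lives in the broker's console rather than in code, but the routing logic can be sketched as follows. The domain list and internal tool URL are hypothetical placeholders:

```python
from urllib.parse import urlparse

# Illustrative blocklist of external AI tool domains; a real CASB policy
# would be maintained in the broker's console, not hard-coded.
EXTERNAL_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}
APPROVED_INTERNAL_TOOL = "https://ai.internal.example.com"  # hypothetical

def route_request(url: str) -> str:
    """Return the URL the broker should serve: redirect external AI
    tools to the approved internal solution, pass everything else."""
    host = urlparse(url).hostname or ""
    if host in EXTERNAL_AI_DOMAINS:
        return APPROVED_INTERNAL_TOOL
    return url
```

The design choice here is redirect-not-block: users keep a working AI tool, so the control closes the exfiltration path without pushing the behaviour further underground.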

Expand governance beyond “cyber”
AI risk in ER&I does not live in one function. It spans business strategy, operational technology, and model risk, meaning governance cannot be owned by cyber teams alone.

CISOs need to work closely with privacy, data & analytics, engineering, field operations, and risk teams to define clear ownership and strengthen governance before issues arise. When these functions align early, accountability becomes clear, creating the conditions required to reduce fragmented controls, limit duplicated spend, and lower the cost of remediating AI risks later.

In practice: A Canadian oil and gas organization established a cross-functional AI governance forum to bring structure to fast-moving AI adoption across the business. Co-led by privacy teams, with strong representation from cyber and technology functions, the forum became the single body for evaluating AI use cases, setting guardrails, and making decisions, keeping innovation moving while ensuring the right controls were in place.

Design for the future
No organization has this fully figured out yet. A practical starting point for CISOs is focusing security investments where the business is already experimenting with AI, then scaling outward from there.

In practice, that means revisiting threat models, SDLC processes, access controls, and talent strategies to account for new AI-driven and agentic behaviours. It also means applying AI within cyber operations (i.e., triage, investigation, reporting, etc.) to free constrained capacity and improve response times.
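One minimal sketch of AI-assisted triage, assuming a simple weighted heuristic that a learned ranking model would eventually replace; the field names and weights are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int           # 1 (low) .. 5 (critical)
    asset_criticality: int  # 1 .. 5; OT safety systems score highest

def triage_score(alert: Alert) -> int:
    # Hand-tuned heuristic weighting asset criticality above raw severity;
    # a model-assisted pipeline would replace this with a learned ranking.
    return alert.severity * 2 + alert.asset_criticality * 3

alerts = [
    Alert("it-endpoint", severity=3, asset_criticality=2),
    Alert("ot-scada", severity=4, asset_criticality=5),
    Alert("corp-email", severity=2, asset_criticality=1),
]
# Work the queue from highest score down, so constrained analyst
# capacity lands on OT-adjacent alerts first.
queue = sorted(alerts, key=triage_score, reverse=True)
```

Even this crude ordering illustrates the point: automating the ranking step frees analyst time for investigation and response, which is where constrained capacity matters most.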

Collaborate with focus
The AI security market is crowded, and nearly every vendor now claims AI-enabled protection. In cost-sensitive ER&I environments, CISOs can’t afford to navigate this landscape alone, but they also can’t afford unnecessary complexity.

Rather than spreading investments across many tools and providers, CISOs should engage a small number of trusted advisors who can support both IT and operational technology environments and help accelerate responsible AI adoption.

In practice: With the AI security market more crowded than ever, a Canadian industrial products and construction organization made a deliberate choice: fewer vendors, deeper collaboration. Working with a trusted set of advisors, they're currently building and testing an AI threat model on a single core platform, using focused effort to generate the kind of risk reduction that broad, fragmented investments don't often deliver.  

Cyber for AI blueprint and architecture framework

AI is embedded in the way ER&I organizations manage assets, run operations, and engage customers. Whether organizations react or lead will depend on the strength of their foundations.

Deloitte’s Cyber for AI blueprint provides a clear, actionable framework to integrate security into AI end-to-end, identifying specific risks and defining the controls required to protect critical systems as adoption accelerates. 

Is your cyber function built to secure AI at operational speed?

Connect with our team to see what Cyber for AI can unlock for your organization. From there, we can assess your current AI controls and apply our security architecture capability framework to strengthen governance, security, and operational resilience.
