Skip to main content

The power of Gen AI is real. So are the vulnerabilities

Topic: CxO & Board

Gen AI promises speed, scale and transformation. But without the right risk strategy in place, you could be accelerating straight into uncertainty. To lead with confidence and security, executives across the c-suite need to understand the full picture from the start.

It is hard to ignore the hype. Gen AI has moved from being a fun experiment among curious employees to a serious investment and priority in boardrooms. Across industries, organisations are feeling the urgency to act.

And yes, the potential is real. But so are the risks if they are not properly understood and addressed.

In Deloitte’s recent article "Managing Gen AI Risks", four categories of risk are explored. These are more than just concerns for the cybersecurity function as they touch upon every part of the leadership team and every corner of the organisation. The four categories are:

  • Enterprise risks: These include threats to data privacy, intellectual property and the uncontrolled use of Gen AI tools by employees.

  • Model risks: These cover issues like hallucinations, data poisoning and prompt injection attacks.

  • Adversarial risks: These relate to how malicious actors can use Gen AI to scale phishing, impersonation or malware.

  • Market risks: These involve regulatory uncertainty, infrastructure constraints and dependency on single vendors.

Gen AI offers significant value when it is implemented with clear intention and a solid foundation. For a closer look at the key risk categories and what leaders can do about them, you can explore the full Deloitte article or read on, as I highlight some of the most pressing risks, including data breaches and cybersecurity threats.

Your model is only as good as your data and architecture
If you have ever asked ChatGPT (or any other kind of LLM) a question, and it responded with something that sounded confident but was completely incorrect, you have seen the issue first-hand. This is known as a hallucination. When a Gen AI model cannot find the right answer, it may create one instead. And it will not tell you it is wrong.

While this might seem harmless in casual use, the risk becomes far more serious inside an organisation. At scale, hallucinations can lead to misinformation, poor decisions and real consequences for compliance, operations and reputation.

This happens because Gen AI models do not produce facts. They generate outputs based on probability and training patterns. What sounds plausible might still be entirely inaccurate.

That is why a strong data foundation is critical. Organisations need reliable data flows, clear governance and relevant business context in every Gen AI use case. This includes:

  • Putting data provenance in place, using tools like model cards or digital passports to track source and training data.

  • Managing data privacy and consent through opt-in and opt-out mechanisms, especially where personal or sensitive data is involved.

  • Verifying the output using content credentials, which can help confirm authenticity and prevent misinformation.

  • Using retrieval-augmented generation systems with curated, trusted sources rather than generic or uncontrolled datasets.

Without these safeguards, even the most powerful models can deliver misleading or flawed results.

Gen AI changes the nature of cybersecurity
With Gen AI, the threat landscape is shifting in ways many organisations are still coming to terms with. Traditional cybersecurity strategies are being challenged by entirely new types of risk that emerge from how these models are built, used and accessed.

Prompt injection and evasion attacks now allow malicious actors to manipulate prompts, bypass safeguards or trigger unintended model behaviour. At the same time, data poisoning is becoming harder to detect, especially as more organisations adopt retrieval-augmented generation. Corrupted inputs can degrade model quality, introduce bias or embed hidden backdoors.

Gen AI also lowers the bar for deepfake and phishing attacks. Threat actors can now automate realistic, personalised messages that are difficult to distinguish from real ones, making impersonation far more convincing.

These risks are not theoretical, as Deloitte research shows that nearly a third of the organisations in the research are already concerned about Gen AI being used in phishing, malware and data loss scenarios. And the pace of these threats is only increasing.

Gen AI systems must be considered part of the enterprise attack surface, and protected accordingly. This means:

  • Implementing input validation and prompt sanitisation to ensure malicious instructions cannot reach the model.

  • Using AI-specific firewalls that monitor model behaviour and block suspicious activity.

  • Applying least-privilege access to model APIs and data pipelines.

  • Incorporating adversarial training to help models recognise and respond to potential manipulation.

  • Ensuring human oversight remains part of every critical workflow where AI plays a decision-support role.

Two steps leaders can take now
To turn Gen AI into real business value, leaders need practical steps that connect innovation with control. Here are two actions leadership teams can take now to scale Gen AI safely and responsibly.

  1. Connect cyber and business resilience
    To mitigate all the cyber security risks surrounding Gen AI , technical and operational leaders need to align from the start. This means building solutions with security, privacy and governance embedded at every stage: from data sourcing and model development to deployment and monitoring. Leading organisations are already integrating Gen AI into their broader cybersecurity architecture and investing in AI-specific defences such as model firewalls, input validation and access controls.

  2. Keep humans involved
    Even the most advanced Gen AI models are not infallible. That’s why the C-suite must establish clear governance structures, review processes and escalation paths to determine when and how human intervention is required. At the same time, employee training plays a key role in managing risk. Organisations need to upskill teams not only to use Gen AI tools effectively, but also to understand their limitations and potential impact. Digital responsibility and behavioural change should be embedded into every adoption effort from the outset.

Building resilience with Gen AI begins with collective leadership
Gen AI is changing how organisations create value, deliver services and make decisions. This is a business-wide shift that affects strategy, operations and risk.

Responsibility cannot sit with IT alone. If the whole organisation is committed to investing in Gen AI, then the whole leadership team must also take ownership of how it is deployed securely, responsibly and in support of long-term growth. From the Head of People using Gen AI in workforce tools to the CFO exploring automation in finance: this is a shared responsibility that cuts across the entire C-suite.

Now is the time to lead with focus and direction. Gen AI has real potential, but without the right guardrails, it can introduce real risk. It is a powerful tool.