AI does not lie; it doesn't know how to. But it makes mistakes far too often, unintentionally generating false or misleading information. These so-called hallucinations can be difficult for non-experts to spot, and they can quietly mislead. For businesses using AI, that opens up risks that need addressing. This is especially critical in M&A due diligence, where understanding and mitigating AI-related risks can significantly affect deal valuation and post-transaction success.
Many organisations are not adequately assessing the risks that accompany AI adoption. According to The 2025 New Generation of Risk Report from Riskonnect, 60% of companies are considering adopting agentic AI, but over half of those have yet to undertake any form of risk assessment. This reveals a gaping hole in risk management practices. Two things are needed: frameworks that comprehensively evaluate potential AI-related risks, and a deep understanding of what it takes to make AI solutions operate reliably and effectively.
AI hallucinations are a significant risk in the deployment of AI technologies. They occur when AI systems generate outputs that are not based on real-world data or factual information, leading to incorrect predictions, misleading insights, or entirely fabricated data. The root causes of AI hallucinations often lie in biases within training data, algorithmic errors, misinformation in data sources, or limitations inherent in AI models. For instance, a 2024 medical study found that when large language models were asked to generate references for systematic reviews, GPT-4 hallucinated 28.6% of citations. In the legal domain, Stanford researchers reported in 2024 that general-purpose chatbots showed hallucination rates of 58-88% on legal questions. Taken together, these results show that hallucinations are neither rare nor trivial, underscoring the need for vigilant oversight and robust validation mechanisms.
AI hallucination risks are significant. Misleading information can lead to poor strategic decisions, financial losses, and reputational damage. In high-stakes sectors such as finance and healthcare, where precision and accuracy are paramount, the consequences of AI hallucinations can be particularly severe. In a large transaction, an error of just 0.5% in a key figure can translate into millions of dollars.
These risks are further compounded in Agentic AI models, where autonomous agents interact and act upon one another’s outputs. When unchecked, hallucinations can propagate through interconnected systems, creating a chain of compounded errors in which small inaccuracies at each step accumulate into large-scale distortion of business processes and decisions.
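The compounding effect described above can be illustrated with a back-of-the-envelope calculation. The sketch below is illustrative only: it assumes each agent in a chain independently introduces a hallucination with some fixed probability, and that an error at any step corrupts the final output. The function name and the 2% rate are hypothetical, chosen purely for demonstration.

```python
# Illustrative sketch: how small per-step hallucination rates compound
# across a chain of agents. Assumes each agent independently introduces
# an error with probability p_error, and that an error anywhere in the
# chain corrupts the final output.

def chain_reliability(p_error: float, steps: int) -> float:
    """Probability that a chain of `steps` agents stays error-free."""
    return (1 - p_error) ** steps

# A 2% per-step error rate looks tolerable in isolation, but erodes
# quickly as agents consume one another's outputs.
for steps in (1, 5, 10, 20):
    print(f"{steps:>2} steps: {chain_reliability(0.02, steps):.1%} error-free")
```

Under these assumptions, a chain of 20 agents with a 2% per-step error rate produces a fully reliable output only about two-thirds of the time, which is why validation checkpoints between agents matter.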
As organisations worldwide deploy AI at speed, the competitive implications posed by hallucinations are immense. Companies that establish strong validation, monitoring, and governance mechanisms will secure a decisive advantage. Those that neglect hallucination dangers may face costly setbacks and struggle to recover.
Ultimately, these challenges highlight the need for rigorous AI due diligence to verify data integrity, assess the reliability of the model, and ensure robust oversight frameworks capable of identifying and mitigating hallucinations before they erode trust in AI systems or cause systemic harm.
For M&A specialists, understanding hallucinations and conducting thorough due diligence on AI systems is essential to verify the rationale for the transaction, especially when AI components are central to the deal. The impact of hallucinations on the valuation, and on the expected post-deal benefits, must be carefully assessed.
Due diligence is a vital component of the M&A process, encompassing a comprehensive evaluation of a business's assets, including its AI systems. This meticulous review is essential for verifying that AI systems, governance and processes are reliable, accurate, and free from significant biases or errors.
Key areas of focus during the due diligence process include:

- Valuation: AI systems are pivotal to a company's valuation. M&A specialists must meticulously evaluate AI systems to ensure they align with strategic objectives and contribute positively to the overall valuation.
- Integration: The integration of AI systems into existing operations post-acquisition is also a critical consideration. Ensuring seamless transitions and continued functionality is vital to maintaining operational efficiency and realising the full potential of the acquisition.

By addressing these aspects, M&A specialists can enhance the strategic value of AI systems and drive successful outcomes in the M&A process.
In a world where AI is becoming integral to business operations, the ability to navigate its complexities has become a key differentiator for successful M&A strategies. Identifying and correcting flaws early is essential to avoid compounded errors, a loss of competitive advantage, or costly post-deal remediation. Robust AI due diligence enables M&A specialists to verify AI reliability, mitigate systemic risks, and preserve the long-term value of the transaction.
We are grateful to Joseph Lee for his valuable inputs to this report.