Algorithm Assurance

The growing use of algorithms

With an increasing number of use cases comes a growing need to understand the risks arising from the use of algorithms, and how management and governing bodies can obtain assurance that these risks are controlled and managed. Our experts provide this assurance over algorithm controls, leveraging a diverse range of skills in assurance services and algorithm technology.

Algorithms: the need for a robust control framework


While the growth of algorithms is welcome, it also brings a growing need for assurance over algorithm controls. The use of algorithms is increasingly coming into the public eye, prompting regulators and senior management to consider how well the associated risks are controlled and managed. A robust algorithm control framework is fundamental to algorithm risk management and should cover key areas including governance and oversight, pre- and post-go-live testing, specific algorithm controls around key risks, monitoring, surveillance and appropriate levels of documentation.

In each of these areas, minimum standards are now being adopted across many markets, which internal and external stakeholders increasingly look to attain and benchmark against. The sorts of questions companies now need to ask include:

  • Are we sure our algorithms are treating our customers fairly, including under stressed conditions?
  • Are we comfortable our algorithms are not deliberately distorting markets, including under stressed conditions?
  • Are we comfortable our algorithms meet their design objectives, including under stressed conditions?
  • Do we have appropriately designed controls to manage our algorithm-related risks?

Because this area is so new, regulations and guidelines are still evolving, creating a challenging algorithmic accountability, transparency and compliance landscape. Our team is well positioned to help clients understand what ‘good’ looks like in their algorithm control framework.

What are the requirements for trustworthy algorithms?


Providing stakeholders with confidence in the use of algorithms is the cornerstone for the effective use of these technologies. Boards therefore need to consider the following six elements during the implementation and ongoing use of algorithms:

Fairness

Fairness is a key concern for many operating self-learning algorithms. In a world where bias is “human”, the risk is that algorithms are trained on biased data sets: they then identify the “trends” in those data and apply them even more forcefully to new data sets, creating enormous reputational risk. Consider the exam algorithm introduced in Britain during the pandemic: with schools in lockdown, the algorithm was used to determine A-level grades based on pupils’ performance earlier in the year. The result was that, based on past data, pupils of historically high-performing (mostly private) schools were disproportionately advantaged compared to those from lower-performing (public) schools, causing an uproar in the media and eventually the withdrawal of the algorithm.2
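One simple way such bias can be surfaced before go-live is a selection-rate comparison across groups. The sketch below is purely illustrative: the function names, the toy data and the 0.8 ("four-fifths") threshold are assumptions, not part of any specific regulatory standard or Deloitte methodology.

```python
# Minimal demographic-parity check on model predictions, grouped by a
# sensitive attribute. All names, data and thresholds are illustrative.

def selection_rates(predictions, groups):
    """Share of positive predictions per group."""
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if pred else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(predictions, groups):
    """Lowest selection rate divided by the highest; 1.0 means parity."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
ratio = disparate_impact_ratio(preds, groups)
if ratio < 0.8:  # common rule-of-thumb threshold, not a legal standard
    print(f"Potential bias: disparate impact ratio {ratio:.2f}")
```

A check like this would sit alongside, not replace, a substantive review of how the training data were collected.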

Reliability

For an algorithm to be considered trustworthy, it must perform its task with at least the same level of reliability as a human. Taking the example of a self-driving car, it is absolutely key for the algorithms to work in a predictable and robust manner to prevent accidents. A strong governance framework is essential to define quality standards and testing requirements.

Privacy

With companies leveraging algorithms to personalize their services to a previously unknown degree, many stakeholders may feel uncomfortable about the amount and type of data collected. It is therefore key to ensure that data is only used for the purpose it was intended for, and that users can decide whether their data may be stored and used for other purposes. So the next time you use your credit card to pay for a bungee jump, it won’t increase the interest rate on your loan.

Security

For source data as well as algorithms, (cyber) security is a key consideration. While the review of a blood sample or lung scan by an algorithm may help your doctor draw the right conclusions, no one would want to find this data leaked on the internet and fully accessible to their next employer. With artificially intelligent systems being trained on large data sets, safeguarding personal data against external attacks has become a critical topic to address.

Accountability

Even though many feel that algorithms are replacing humans, humans are instead taking on a different role: whether “in the loop”, “on the loop” or “in command”, every algorithm needs a dedicated person accountable for ensuring compliance and processing accuracy. Where algorithms are used to support doctors in the detection of diseases, the governance around the algorithm needs to ensure that ultimate responsibility for the diagnosis remains with the doctor.

Transparency

Stakeholders will only trust algorithms for which they understand the data being used and how decisions are made. Where the portfolio of an investment fund is managed by a robo-advisor, investors will challenge the source of the data used to inform decisions, and how it is processed, before investing their funds. A successful track record is only one indicator of future returns, and investors will expect further explanation of the set-up and controls when assessing their risks.
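For simple scoring models, one way to make a decision explainable is to report each input's contribution to the final score. The sketch below assumes a hypothetical linear model; the feature names and weights are invented for illustration and do not represent any real robo-advisor.

```python
# Illustrative only: explain a linear scoring model's output by listing
# each feature's contribution (weight * value). Names and numbers are
# assumptions, not a real investment model.

WEIGHTS = {"past_return": 0.6, "volatility": -0.9, "fees": -0.4}

def explain_score(features):
    """Return per-feature contributions and the total score."""
    contributions = {f: WEIGHTS[f] * v for f, v in features.items()}
    return contributions, sum(contributions.values())

contrib, score = explain_score(
    {"past_return": 0.08, "volatility": 0.20, "fees": 0.01}
)
# The breakdown shows which inputs drove the decision, e.g. that high
# volatility pulled the score down more than past returns lifted it.
```

For more complex, non-linear models, the same transparency goal typically requires dedicated explainability techniques and documented model reviews.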

How can Deloitte help?


Deloitte can assist you in addressing these challenges. Our services cover the topic of “Algorithm Assurance” in a comprehensive way, focusing on the following elements:

  • Governance and internal controls: Readiness assistance and assurance over the governance framework and internal controls relating to the implementation and operation of algorithms.
  • Algorithm review, modelling and data: Assessment of the models used in the development, training, testing and deployment of algorithms, and review of documentation to ensure the explainability and transparency of models and of decisions taken by the algorithm.
  • Monitoring: Review of monitoring processes to ensure early identification of biased or incorrect decisions.
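A post-go-live monitoring control can be as simple as comparing an algorithm's current decision rate against a baseline window and alerting on drift. The following sketch is a toy example under stated assumptions: the tolerance, window sizes and data are all illustrative, not a prescribed monitoring design.

```python
# Illustrative drift check: flag when an algorithm's positive-decision
# rate deviates from its go-live baseline. Thresholds are assumptions.

def positive_rate(decisions):
    """Fraction of positive (1) decisions in a window."""
    return sum(decisions) / len(decisions)

def drift_alert(baseline, current, tolerance=0.10):
    """Return (alert, delta): alert is True when the current rate
    deviates from the baseline by more than the absolute tolerance."""
    delta = abs(positive_rate(current) - positive_rate(baseline))
    return delta > tolerance, delta

baseline = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]  # 50% approvals at go-live
current  = [1, 1, 1, 1, 0, 1, 1, 1, 0, 1]  # 80% approvals this week
alert, delta = drift_alert(baseline, current)
if alert:
    print(f"Decision-rate drift of {delta:.0%} exceeds tolerance")
```

In practice such checks would run per customer segment, so that drift affecting only one group (a possible bias signal) is not averaged away.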

With our extensive experience reviewing our clients’ processes, technology and data models, we are best positioned to provide you and your stakeholders with assurance on the robustness and reliability of the controls around your algorithms. Moreover, we can assess your compliance with the criteria for “Trustworthy AI” as developed by Deloitte US, or with any other available governance framework1 for the ethical use of algorithms.

Algorithm Assurance provides you and your stakeholders with transparency and instills trust in your systems and algorithms.

1 ALTAI - The Assessment List on Trustworthy Artificial Intelligence | Futurium
2 A-levels and GCSEs: How did the exam algorithm work? - BBC News
