
Establish future-proof AI governance across your organisation

A practical approach to trustworthy AI use

This article explores the three key steps an organisation can take to set up future-proof AI governance:

Define AI in your organisational context and determine the strategic direction of travel based on the type of AI use cases you expect to roll out.

Develop a trustworthy AI governance framework to manage your AI risks effectively and work cross-functionally because AI risks vary widely and need a multi-disciplinary approach.

Leverage existing processes around data management, privacy and related disciplines to define a lean and effective governance structure for trustworthy AI.

When new technologies emerge rapidly and related regulations are drawn up, organisations must adapt quickly while also keeping a longer-term view. To deploy AI in a trustworthy and sustainable way, business leaders must manage a broad spectrum of risks in a pragmatic manner. This was acknowledged in our recently published report1, in which only 23% of respondents rated their organisations as highly prepared in the area of risk and governance for AI. Organisations must develop a comprehensive understanding of these multifaceted risks and take informed decisions on how to manage the potential impact of AI on individuals, groups of individuals, the organisation as a whole, and broader society.


Trustworthy AI: a trustworthy AI system is ethical, robust, and compliant.


Ethical means the system respects ethical principles and values, prioritising user privacy and data protection so that personal information is handled with the utmost respect for individuals’ rights.

Robust means the system is reliable, safe, and secure from malicious attacks. It must therefore have undergone rigorous testing and validation to identify and manage potential vulnerabilities.

Compliant means the system respects regulations and corporate policies. For instance, the data it uses for training must follow a well-defined data governance framework to ensure the datasets are sufficiently complete and free of bias.

From a risk perspective, the new EU AI Act only amplifies the need for a strong risk management system that ensures trustworthy AI by fulfilling the regulatory requirements. A focused and practical approach to addressing these requirements and mitigating AI risks relies on effective governance, which can be defined and operationalised through the three steps described below. Establishing a governance framework, in particular for the use of Gen AI, is the number one priority of organisations that took part in the recent Deloitte survey1: 51% of these organisations are currently focusing on actively managing the risks associated with Gen AI implementation.

EU AI Act: The recently adopted EU AI Act has introduced strict regulatory requirements that organisations must adhere to. Its primary objective is to ensure the safe and trustworthy use of AI, with specific rules for particular use cases and sectors, without hindering innovation.

As a first step towards establishing effective AI governance, you need to clearly scope and define what AI means for your company. As defined in the EU AI Act, an AI system is ‘a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’.

This definition is broad but needs to be well understood so that no system within your organisation is overlooked. The organisation should also distinguish between long-established AI systems that are already subject to comprehensive risk management and new or forthcoming AI solutions that still require thorough risk treatment; a central inventory, such as the one sketched below, makes this distinction explicit.
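One minimal way to keep track of this is a central AI system inventory. The Python sketch below is illustrative only: the record fields echo the elements of the EU AI Act definition (autonomy, adaptiveness, output type), while the field names and the two example entries are our own assumptions, not requirements of the Act.

```python
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    ASSISTIVE = "assistive"      # a human makes every decision
    CONDITIONAL = "conditional"  # a human reviews selected outputs
    AUTONOMOUS = "autonomous"    # the system acts without routine review

@dataclass
class AISystemRecord:
    """One entry in an organisation-wide AI system inventory."""
    name: str
    owner: str          # accountable department or role
    autonomy: Autonomy  # mirrors the Act's "varying levels of autonomy"
    adaptive: bool      # may the system adapt after deployment?
    output_type: str    # e.g. "prediction", "content", "recommendation"
    legacy: bool        # already covered by existing risk management?

inventory = [
    AISystemRecord("credit-scoring-v3", "Risk", Autonomy.CONDITIONAL,
                   adaptive=False, output_type="prediction", legacy=True),
    AISystemRecord("support-chatbot", "Customer Service", Autonomy.AUTONOMOUS,
                   adaptive=True, output_type="content", legacy=False),
]

# New or forthcoming systems that still need thorough risk treatment:
needs_review = [s for s in inventory if not s.legacy]
```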

Building an effective AI risk management system means mitigating the risks in order to safeguard the interests of the three key parties that may be affected: individuals or groups of individuals, your organisation, and broader society.

To achieve this, you need to start by identifying the specific AI use cases you plan to introduce over the next one to two years as part of your overall AI strategy. For instance, are you thinking of building an AI solution for predictive maintenance that works with machine logs, or a customer service chatbot that is trained on customers’ personal data? Do you intend to use AI only as a productivity tool to increase the efficiency of your employees, or do you have a pipeline of AI-based business projects? As part of this analysis, it is also important to differentiate between use cases based on developing AI solutions internally and use cases relying on third-party AI models. Your overall AI strategy, the specific AI use cases, and the underlying data used will determine the nature and magnitude of the risks involved, as the simple triage sketch below illustrates.
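As a first pass, the risk drivers just mentioned (personal data, third-party models, customer exposure) can be combined into a preliminary risk tier. The function below is a hypothetical sketch: the factors and the scoring are placeholders that should be replaced by your organisation’s own risk appetite and the EU AI Act’s risk categories.

```python
def preliminary_risk_tier(uses_personal_data: bool,
                          third_party_model: bool,
                          customer_facing: bool) -> str:
    """First-pass triage of an AI use case (illustrative only).

    The factors and thresholds are placeholders; real tiering should
    follow your risk appetite and the EU AI Act's risk categories
    (prohibited, high-risk, limited-risk, minimal-risk).
    """
    score = sum([uses_personal_data, third_party_model, customer_facing])
    return "low" if score == 0 else "medium" if score == 1 else "high"

# Predictive maintenance on machine logs, built in-house:
print(preliminary_risk_tier(False, False, False))  # -> low
# Customer service chatbot on personal data, using a vendor model:
print(preliminary_risk_tier(True, True, True))     # -> high
```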

In our recent survey1, 55% of organisations reported avoiding certain Gen AI use cases because of data-related issues. The top data-related concerns include the use of sensitive data in models and the management of data privacy and security.

Guide the risk identification by leveraging your trustworthy AI framework, which outlines the key principles of trustworthiness and therefore the key risk areas to consider. As a framework you can use, for instance, the Deloitte Trustworthy AI Framework, which is based on the key principles of the EU AI Act and the NIST AI Risk Management Framework.

While all dimensions of the framework are crucial for successful AI risk management, for each specific use case some dimensions, and therefore some risk areas, are less critical than others.

Looking at the first example above, the predictive maintenance of machinery in a factory, the risks related to the robustness and security of the model must be prioritised.

In the second example, the customer service AI chatbot, the risks relate more to privacy, transparency, and explainability, as customers’ personal data is involved. Each trustworthiness dimension will carry a different weight depending on the use case. It is therefore very important to determine the AI use case concerned and its related risk profile.

Once the most relevant risk areas for the use case are defined, you can apply the appropriate controls needed to manage them, as sketched below.
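The sketch below shows one way to encode this weighting and turn it into a shortlist of controls. The dimension names loosely echo common trustworthy-AI frameworks, but the weights and the control lists are invented for illustration; they are not taken from the Deloitte Trustworthy AI Framework.

```python
# Illustrative controls per trustworthiness dimension; the entries
# below are invented for this example, not drawn from any framework.
DIMENSION_CONTROLS = {
    "robustness": ["stress testing", "fallback behaviour", "drift monitoring"],
    "security": ["adversarial testing", "access control", "model hardening"],
    "privacy": ["data minimisation", "anonymisation", "privacy impact assessment"],
    "transparency": ["user disclosure", "decision logging"],
    "explainability": ["model cards", "human-readable rationales"],
}

USE_CASE_WEIGHTS = {
    # Machine-log model: robustness and security dominate.
    "predictive_maintenance": {"robustness": 0.40, "security": 0.30,
                               "privacy": 0.10, "transparency": 0.10,
                               "explainability": 0.10},
    # Personal-data chatbot: privacy, transparency, explainability dominate.
    "customer_chatbot": {"robustness": 0.10, "security": 0.15,
                         "privacy": 0.30, "transparency": 0.25,
                         "explainability": 0.20},
}

def priority_controls(use_case: str, threshold: float = 0.2) -> dict[str, list[str]]:
    """Controls for every dimension whose weight meets the threshold."""
    weights = USE_CASE_WEIGHTS[use_case]
    return {dim: DIMENSION_CONTROLS[dim]
            for dim, w in sorted(weights.items(), key=lambda kv: -kv[1])
            if w >= threshold}

print(priority_controls("customer_chatbot"))
# -> controls for privacy, transparency, and explainability
```

In practice the weights would come from a structured risk assessment rather than hard-coded constants, but the principle of weighting the trustworthiness dimensions per use case and mapping the heaviest ones to concrete controls stays the same.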

A multi-disciplinary approach is needed to manage AI successfully, because managing AI risks requires contributions from different departments within the organisation. While most departments may manage their risks independently, silos must be avoided so that newly implemented AI systems function properly and can be maintained effectively. The same applies to regulatory readiness: the primary preparatory actions typically involve the general counsel team for formal regulatory monitoring, and the business lines and corporate strategy for regulatory forecasting and assessments1.

AI governance can be achieved efficiently by drawing on existing practices and knowledge. Concerns about regulatory compliance are the primary obstacle to the successful development and deployment of Gen AI tools and applications, as indicated by 36% of Deloitte’s survey respondents4. However, regulatory compliance in the field of AI is not entirely separate from existing regulations: compliance processes already aligned with the GDPR or the Swiss Federal Act on Data Protection can help you define your AI trustworthiness framework and the necessary governance quickly.

Conclusion

AI is fast-moving, and establishing future-proof AI governance is crucial for sustained success. Embracing proactive risk management and ethical, compliant practices will not only satisfy regulatory requirements but also foster trust among stakeholders, paving the way for trustworthy AI. The future of AI governance lies in creating sufficiently broad awareness within the organisation while building a dedicated, flexible, and continuously monitored governance framework. This will ultimately put the organisation on a sustainable and successful path in the rapidly evolving AI landscape.

If you enjoyed reading this article and want to learn more about successfully implementing AI in your organisation, please see our next article, in which we will delve deeper into AI model risks.


References

1. Deloitte, State of Generative AI in the Enterprise.
