What you need to know about NIST's AI Risk Management Framework, published in January 2023

What is NIST?

The National Institute of Standards and Technology (NIST) was founded in 1901 and is now part of the U.S. Department of Commerce. NIST aspires to be a world leader in creating critical measurement solutions and promoting equitable standards.

What is the focus of the published AI Risk Management Framework (RMF)?

Its stated goal is to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

Why do I need an AI risk management capability?

Not all AI use cases are created equal when it comes to risk. Proportional governance is key to ensuring appropriate AI risk management: identify and address the risks that are most critical to the organisation without hindering innovation through overly restrictive controls.
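To make the idea of proportionality concrete, here is a purely illustrative sketch in Python: use cases are tiered by two example risk factors, and heavier controls attach only to higher tiers. The factors, tier names, and controls are our own hypothetical placeholders, not anything prescribed by NIST.

```python
from dataclasses import dataclass

# Hypothetical risk factors for tiering AI use cases; real criteria
# would be set by the organisation's risk appetite and context.
@dataclass
class UseCase:
    name: str
    affects_individuals: bool  # e.g. credit, hiring, or health decisions
    fully_automated: bool      # no human review before the decision takes effect

def risk_tier(uc: UseCase) -> str:
    """Assign a tier so that controls scale with potential impact."""
    if uc.affects_individuals and uc.fully_automated:
        return "high"    # e.g. independent validation, human oversight, audit
    if uc.affects_individuals or uc.fully_automated:
        return "medium"  # e.g. periodic review and performance monitoring
    return "low"         # e.g. lightweight registration in an AI inventory

for uc in (UseCase("CV screening", True, True),
           UseCase("Internal document search", False, False)):
    print(f"{uc.name}: {risk_tier(uc)} tier")
```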

Why should I consider reading the NIST AI RMF?

NIST is a well-recognised organisation with a history of publishing strong risk management frameworks in key domains such as cybersecurity. Its frameworks can be implemented on a stand-alone basis or in conjunction with other frameworks, such as those published by ISO.

The AI RMF is free to use and provides practical guidance on the capabilities an organisation should have in place to innovate with confidence and manage AI risks.

How does the framework relate to the AI principles outlined in OECD and the proposed EU AI Act?

The key principles for trustworthy AI are shared across all three. Alongside the AI RMF, NIST published an illustrative comparison of how the key areas of risk map across these publications; Table 1 is an excerpt from this comparison.

Note, however, that while the principles are similar, the intent and use of the referenced frameworks differ: the NIST publication focuses specifically on the risk management activities associated with each principle.

NIST's full comparison of principles with other prominent frameworks is published in its Crosswalk documents.

Table 1: Mapping of key risk areas
(Columns one, two, and three are referenced from the NIST Crosswalk publication.)

What are the key constructs of the NIST AI RMF?

The AI RMF Core provides outcomes and actions that enable dialogue, understanding, and activities to manage AI risks and responsibly develop AI systems. The Core comprises four functions, each underpinned by four to six categories, with subcategories in each category (a sketch of how this structure might be tracked follows the list below). The four functions are:

  • Govern: A culture of risk management is cultivated and present.

  • Map: Use case context is recognised and risks related to context are identified.

  • Measure: Identified risks are assessed, analysed, or tracked.

  • Manage: Risks are prioritised and acted upon based on projected impact.
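To make the Core's structure concrete, the sketch below shows one way an organisation might represent functions, categories, and subcategories in an internal tracker. The four function names are NIST's; the Measure 1 description is paraphrased from the framework, and the status values and reporting logic are illustrative assumptions.

```python
# Minimal tracker mirroring the AI RMF Core: four functions, each
# holding categories, which in turn hold subcategory-level statuses.
rmf_core = {function: {} for function in ("Govern", "Map", "Measure", "Manage")}

# Example entry: record progress against one Measure category.
rmf_core["Measure"]["Measure 1"] = {
    "description": "Appropriate methods and metrics are identified and applied",
    "subcategories": {"Measure 1.1": "in_progress", "Measure 1.2": "not_started"},
}

# Summarise completion per function for a governance report.
for function, categories in rmf_core.items():
    statuses = [s for c in categories.values() for s in c["subcategories"].values()]
    done = statuses.count("complete")
    print(f"{function}: {done}/{len(statuses)} subcategories complete")
```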

Image 1 contains an excerpt from the AI RMF Playbook showing the four categories in the 'Measure' function; details are shown under the first category.

Image 1: Measure categories, with subcategories noted for Measure 1

Is there anything else that I should note in the NIST AI RMF publication?

Appendix B has examples of ‘How AI Risks Differ from Traditional Software Risks.’ Understanding how the risks associated with AI differ from those of traditional software is fundamental to managing IT risk effectively and efficiently across the organisation.

Three examples of where AI risks diverge from other IT risks particularly resonate with observations from our client engagements:

  • Difficulty in scoping and performing regular AI-based software testing because AI systems are not subject to the same controls as traditional code development;

  • AI systems may require more frequent monitoring and automated triggers to conduct corrective maintenance due to data, model, or concept drift (a simple drift-check sketch follows this list); and

  • Underdeveloped software testing standards in the AI software development domain.
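As an illustration of the second point above, the sketch below uses a Population Stability Index (PSI) check as an automated drift trigger. The metric choice and the 0.2 threshold are common industry conventions rather than NIST guidance, and both would need tuning per use case.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's training-time distribution (`expected`)
    with its live distribution (`actual`); larger PSI means more drift."""
    # Bin edges are fixed from the training-time distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_counts, _ = np.histogram(expected, bins=edges)
    actual_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions, flooring at a tiny value to avoid log(0).
    expected_pct = np.clip(expected_counts / expected_counts.sum(), 1e-6, None)
    actual_pct = np.clip(actual_counts / actual_counts.sum(), 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

PSI_THRESHOLD = 0.2  # illustrative; 0.2 is a commonly cited warning level

def drift_check(train_sample, live_sample):
    """Automated trigger: flag the model for corrective maintenance on drift."""
    psi = population_stability_index(train_sample, live_sample)
    if psi > PSI_THRESHOLD:
        print(f"PSI={psi:.3f}: drift detected, raise a maintenance ticket")
    else:
        print(f"PSI={psi:.3f}: within tolerance")

rng = np.random.default_rng(0)
drift_check(rng.normal(0, 1, 5000), rng.normal(0.7, 1, 5000))  # shifted -> drift
```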

Conclusion

When it comes to the increased use of AI in an organisation, risk management needs to be targeted where it matters most. By taking a proportionate approach, organisations can manage the risks associated with AI without burdening themselves with overly restrictive controls. Effective AI governance is key to giving business leaders the confidence to innovate.

Implementing an AI risk management framework at an enterprise scale requires a wide range of skills to drive change across the three lines of defence. We have supported organisations to address AI risks, including developing custom guidance documents for AI developers, updating existing risk and governance processes, and reviewing operating models to ensure they are fit for purpose in governing AI.

Please reach out to us if you would like to learn more about how we support organisations to build their AI risk management capabilities.