Redefining Third Party Management with Gen AI

The first part of this two-part series (intended for Boards and executive leadership) showcases the significant opportunities Gen AI presents for managing third-party relationships, including actionable insights and use cases. It also considers evolving technology trends and discusses the new risks that Gen AI can create for organizations.

Case Study: To start, let’s see what Lucy did on her way to work

Lucy’s journey to work is interrupted by an alert on her mobile: abnormal congestion at a supplier’s logistics center is likely to delay a critical shipment, causing her company to miss a commitment to its top customer. Despite the weak signal on the train, her friendly chatbot is online and immediately offers a selection of re-routing choices, together with the associated costs, carbon emissions and related compliance requirements. Given the recent spate of disruptions in that specific area, Lucy’s CEO had already asked her to locate alternative suppliers following discussions at a recent Board meeting. By using another chatbot to scout for alternative suppliers, she had already made significant progress on this. Its latest features go a step further, generating reports that support the due diligence of potential suppliers based on a deeper dive into related data. This Gen AI-based system also offers the potential to track and trace critical subcontractors so that they can be included in future alerts, where relevant.

Organizations that continue to push the frontiers of their enterprise increasingly benefit from the skills, knowledge and expertise of their third-party ecosystem partners. However, this often means that the number and criticality of the third parties they work with increases, and so does the complexity of the risks those parties create. As a result, existing mechanisms to manage these relationships quickly become outdated. Smarter use of AI technologies could help continually realign third-party management with evolving business strategies and operating models, efficiently and cost-effectively, amid volatility and uncertainty.

Gen AI now goes a step further in enhancing user experience by interacting with third-party managers in their spoken language rather than through software commands or machine language. This allows AI to act as an assistant, a more knowledgeable tutor, or even a future “co-pilot” for those managing third-party relationships (as illustrated in Lucy’s example). Supporting decision-making content such as spreadsheet-based analyses, charts, and diagrams can also be auto-generated and seamlessly integrated, helping users manage not just routes or logistics but other complex issues such as concentration risk or multi-jurisdictional regulatory compliance.

Our Third-Party Risk Management (TPRM) Survey 2023 demonstrates that the most resilient organizations manage third parties using evolving technologies (including Gen AI). This makes them more agile in navigating the complexity, velocity, and ripple effects of newer, interconnected risks, drawing on diverse data sources in closer to real time. Such a comprehensive approach, with interconnected risks understood, real-time monitoring in place, and well-informed stakeholders, enables them to react more quickly to the impacts of adverse events.

Over the past few years, organizations have attempted to leverage the diverse tools and technologies used for third-party management to establish an intelligent third-party risk management and monitoring capability, augmented by real-time information. The role of such a coordinated platform is to provide a single, up-to-date picture, rather than inconsistent and outdated information from multiple (and often static) sources. The longer-term strategic aspiration has been to reduce reliance on traditional techniques in the third-party toolkit, such as third-party questionnaires, and move towards credible, actionable intelligence provided in real time to manage third parties effectively and in proportion with an organization’s risk exposure.

This is precisely where Gen AI fits in to make this strategic aspiration a reality. Organizations have experimented with classification, prediction, summarization, machine learning (ML) and intelligent process automation over the last few years. As mentioned, Gen AI’s ability to interpret and create content, including charts and diagrams, while “interacting like another (human) colleague”, is expected to create many opportunities through:

  • an enhanced user experience making third-party management tasks more efficient and effective;
  • collectively making notable enterprise-level impact in terms of cost savings; and 
  • transformational changes to a business’ strategic model or at the value-chain level (see the case studies in this insight).

More than just process automation

Our experience shows that while organizations have primarily used AI for process automation of third-party management activities, there’s significant potential for AI techniques to digitally transform control and prevention activities (CAPA), from initial due diligence through ongoing monitoring and potentially beyond (e.g., setting, reviewing, and monitoring risks against risk appetite). These capabilities help users apply their available domain expertise more efficiently and effectively when reviewing the red flags and other content produced by Gen AI systems. Organizations are already making progressive changes to be ready to adapt to these new opportunities.

Moving forward with Gen AI in third-party management

In this section, we relate the growing opportunities of Gen AI to third-party management using three of the most popular types of use cases in organizations:

  1. Gen AI as the co-pilot
  2. Ensuring proportionate effort in due diligence and monitoring
  3. Filling the gaps in internal and external information

“Co-pilot” is a common term used today to describe Gen AI-powered virtual assistants. The co-pilot concept is a “human first” approach: AI and ML tools empower the work of human experts and augment their ingenuity in the work they do. However, at this point they cannot match the subtle nuances or reasoning capabilities of the human mind.

Many organizations have already implemented risk-domain-specific tools that help address particular types of risk. Tools addressing cyber risk, data security and privacy are the most common examples, followed by those that enable multi-jurisdictional legal compliance to ensure appropriate ethical, social or environmental behaviors within third-party ecosystems. However, most of these organizations lack a robust unifying mechanism to quickly look up these multiple sources of information, interpret the messy reality, and provide an instant, human-like response to a risk manager’s query.

Coupled with Natural Language Understanding (NLU), these AI-powered co-pilots can potentially front-end the third-party management process. They could guide (human) users through risk management workflows, provide “just-in-time” explanations of the key risk management concepts applied, and carry out assessments of actual risk against risk appetite. See the examples below.
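To make the co-pilot idea concrete, the Python sketch below shows how such a front end might route a plain-language query to several risk-domain sources and merge the answers into one response. Everything here, from the source names to the keyword matching, is invented for illustration; a real implementation would use proper NLU and live APIs rather than a keyword map.

```python
# Illustrative sketch only: a hypothetical co-pilot "front end" that routes a
# natural-language query to the relevant risk-domain sources and assembles a
# single answer. All names and data below are invented for illustration.

# Simulated risk-domain information sources (in practice: cyber, privacy and
# compliance tools exposing their own APIs).
RISK_SOURCES = {
    "cyber": lambda supplier: {"rating": "B", "open_findings": 3},
    "privacy": lambda supplier: {"gdpr_assessed": True},
    "compliance": lambda supplier: {"sanctions_hit": False},
}

# Very crude stand-in for NLU: map keywords in the query to risk domains.
KEYWORDS = {
    "cyber": "cyber", "breach": "cyber",
    "privacy": "privacy", "gdpr": "privacy",
    "sanction": "compliance", "compliance": "compliance",
}

def answer(query: str, supplier: str) -> dict:
    """Look up every risk domain mentioned in the query and merge the results."""
    domains = {d for word, d in KEYWORDS.items() if word in query.lower()}
    return {d: RISK_SOURCES[d](supplier) for d in sorted(domains)}

result = answer("Any cyber or sanctions concerns with Acme?", "Acme")
print(result)
```

The design point is the unifying layer itself: one entry point that fans a question out to otherwise siloed tools, which is exactly what the article notes most organizations lack today.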

Given the increasing scale of third-party ecosystems, it’s important to concentrate effort (e.g., allocate limited resources) on the most important and/or highest-risk third parties. This requires effective segmentation of the third-party population, not just at initial due diligence and onboarding but on a continuous basis, as the circumstances affecting inherent risk factors and control effectiveness change. These inherent risks need to be assessed for each relevant risk domain, and the list of applicable domains now goes far beyond traditional considerations, such as cybersecurity and data privacy, into emerging areas including environmental, social and ethical governance.

To deal with such overwhelming volumes of internal and external data, the more progressive organizations have been digitizing their approach using AI-based questionnaires combined with risk intelligence that incorporates real-time data based on the nature or location of the goods or services being delivered. This, in turn, enables them to dynamically assess residual risk (i.e., the inherent risk remaining even after relevant control activities are considered), ensuring it stays within risk appetite and is monitored appropriately (as shown in the example below).
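As a minimal illustration of the residual-risk logic described above, the Python sketch below scores hypothetical third parties against an assumed risk appetite threshold. The 0-to-1 scoring scale, the supplier names and every number are invented for illustration; real segmentation models assess many risk domains with far richer data.

```python
# Illustrative sketch only: simplified residual-risk segmentation of third
# parties against a risk appetite threshold. Scale and figures are invented.

def residual_risk(inherent: float, control_effectiveness: float) -> float:
    """Residual risk = inherent risk remaining after controls (0..1 scale)."""
    return inherent * (1.0 - control_effectiveness)

def segment(third_parties: dict, appetite: float) -> dict:
    """Flag each third party whose residual risk exceeds the appetite."""
    return {
        name: {
            "residual": round(residual_risk(v["inherent"], v["controls"]), 2),
            "within_appetite": residual_risk(v["inherent"], v["controls"]) <= appetite,
        }
        for name, v in third_parties.items()
    }

portfolio = {
    "Supplier A": {"inherent": 0.8, "controls": 0.9},  # high risk, strong controls
    "Supplier B": {"inherent": 0.6, "controls": 0.2},  # medium risk, weak controls
}
print(segment(portfolio, appetite=0.25))
```

Note how the segmentation can flip without the inherent risk changing at all: if Supplier A’s control effectiveness degrades, its residual risk can breach the appetite, which is why the article stresses revisiting segmentation continuously rather than only at onboarding.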

Our 2021 publication, “Broadening the frame: The case for integrated third-party management”, highlighted the organizational need for greater coordination and integration across third-party management functions such as sourcing, contracting, legal, financial management and risk management. Yet very few organizations managed to address this effectively, due to the lack of accurate internal and external data presenting a single version of the truth. In our 2023 TPRM survey, Board members and leadership also indicated that, because of such data gaps, they now feel the pressure when it comes to making the right decisions and answering to diverse stakeholders.

Managing data related to the multiple facets of third-party relationships is, no doubt, a highly labor-intensive activity involving cleansing, extracting, integrating, cataloging, labeling and organizing data, as well as defining and performing many other data-related tasks.

In our experience, one of the less visible but highly impactful use cases of Gen AI is in improving data management, as explained below. Using AI to put a quality and reliability stamp on data can help leadership and Board members understand what is within their control, what is not, and which factors are easier to change (e.g., is the cheapest source of procurement encouraging unethical or environmentally harmful practices?) while still having to contend with wider macro-economic challenges.

While most organizations are excited about the opportunities that AI and Gen AI can create, nearly all are conscious of the risks and challenges that accompany this new technology and the way it is implemented. As with any other use of technology, the most fundamental risk is poor-quality internal and external data underlying AI applications. As always, “garbage in, garbage out”: it is essential to focus on the data first. Similarly, organizations need to ensure that their AI algorithms and models are reliable (a chatbot using high-quality data can still give poor answers if not engineered correctly). Explainable and trustworthy AI is also important for complying with applicable laws, regulations, and other standards and internal policies, and this must be continually monitored. Additionally, it is important to ensure that the AI system is not “hallucinating”, i.e., producing solutions that look real but are based on imaginary data.

With the constantly changing nature of today’s business environment, it is no longer enough for an organization to understand its risk landscape only at a static point in time, accentuating the need for continuous monitoring. An AI-driven methodology, supported by a human layer of validation, helps to maintain this ongoing awareness and provides actionable insights to mitigate adverse connections that may affect an organization’s situational perspective and reputational standing. Whilst Gen AI can provide an increasingly efficient way to monitor at scale, it cannot yet operate in isolation without posing quality risks, including false positives or missed Service Level Agreements (SLAs).
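The human validation layer described above can be pictured as a simple triage step: alerts the model is confident about are actioned automatically, while low-confidence alerts are queued for a human reviewer instead of being trusted outright. The Python sketch below is purely illustrative; the confidence threshold, alert fields and data are all invented.

```python
# Illustrative sketch only: an AI-driven monitoring loop with a human
# validation layer. Alerts below a confidence threshold go to a reviewer
# rather than being actioned automatically. Threshold and data are invented.

REVIEW_THRESHOLD = 0.8  # model confidence below this routes to a human

def triage(alerts: list) -> tuple:
    """Split model-generated alerts into auto-actioned and human-review queues."""
    auto, review = [], []
    for alert in alerts:
        (auto if alert["confidence"] >= REVIEW_THRESHOLD else review).append(alert)
    return auto, review

alerts = [
    {"supplier": "Acme", "issue": "possible SLA breach", "confidence": 0.95},
    {"supplier": "Beta", "issue": "port congestion", "confidence": 0.55},
]
auto, review = triage(alerts)
print(f"auto-actioned: {len(auto)}, queued for human review: {len(review)}")
```

Tuning the threshold is itself a risk-appetite decision: set it too low and false positives slip through unreviewed; set it too high and the human queue swamps the efficiency gains of monitoring at scale.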

To achieve this objective, an overarching, well-defined and documented governance and risk management framework must be established, covering the development, deployment, and use of AI in the organization. Particular attention needs to be paid to cybersecurity, privacy, and the ethical and responsible use of AI, as failures in these areas continue to make headline news, challenging organizational reputation and, eventually, performance and profitability.

Another aspect of the responsible use of AI models is ensuring that biases and discrimination are not amplified but eliminated, through transparency about how a specific conclusion was reached in the system. Additionally, organizations should use well-trained and trusted partners to establish and manage their organizational AI ecosystems, setting themselves up for success. We believe such investments will go a long way in building and maintaining trust with diverse stakeholders, including regulators, customers, suppliers, subcontractors and other critical or material third parties, in which every member of the organization has a positive role to play. A disciplined approach to risk management also ensures that the risks taken on this AI journey remain aligned to the organizational risk appetite.

It is quite clear from the use cases outlined above that the opportunities from using AI span different aspects of third-party management, from sourcing and procurement through to financial, legal and risk management, often with access to data held within multiple repositories or other organizational silos. However, ad-hoc and inconsistent approaches to governing such “departmental” AI projects initiated and managed by functional teams would certainly prevent organizational synergies. Our 2023 Deloitte publication entitled “Stuck on the curve” points out that these functional teams may themselves be at different places in their AI maturity, and therefore not all are seeking the same results from AI initiatives. This, in turn, would introduce further inconsistencies in the approach. The challenge is to bring initiatives, stakeholders, and the vision of AI value together, allowing the entire organization to move forward as one. This means articulating a common vision and definition of AI value, identifying use cases for impact, and sharing the insights for reusability and scale in a consistent and coordinated manner. We believe that progress made in achieving this ambition would define the level of organizational maturity in AI.

A “maturity model” portrays the degree of formality and optimization of processes related to the discipline in question. In this sense, maturity is also a measure of an organization’s “room for improvement” at a particular level. The uppermost level is a notional ideal state in which processes (and the underpinning technology platforms) are systematically managed through a combination of continuous improvement and optimization. In line with this thinking, we propose four (simplified) maturity stages for AI initiatives using the crawl-walk-run-fly framework.

Organizations that embrace Gen AI will emerge as the clear winners in leveraging third-party relationships

We hope that this article has been useful in reinforcing your understanding of how Gen AI can transform the way you manage your extended enterprise. As always, the future is already here; it’s just not evenly distributed. Only a small number of organizations have started reimagining their business strategies, operations and markets to optimize collaborative intelligence by judiciously blending human capital and AI. This creates a significant opportunity for potential fast followers.

But in the journey ahead, there is no one size that fits all. Each organization must carefully craft its path toward the most strategic use of AI, rather than simply emulating others, recognizing the nuances of its own context and setting. Those that do this well will be able to continually reposition themselves to lead in the marketplace by rethinking how they manage their extended enterprise.

About this insight

This research-based article has been co-authored by Kristian Park, Global Lead Partner, and Dr Sanjoy Sen, Head of Research for the Extended Enterprise team at Deloitte UK. Dr Sen is also a part-time co-director of the Masters program on AI and Business Strategy at Aston University, UK. Other key contributors include Dan Kinsella, Partner, Risk and Financial Advisory, Deloitte & Touche LLP; Daniel Abichandani, Partner, Risk Advisory, Deloitte Canada; and Shannon Pym, Senior Manager in the Intelligence as a Service (IntaaS) team at Deloitte UK. The research is cross-sectional in nature, collecting data to provide a “snapshot” at one point in time; it does not aspire to be a longitudinal study involving data collection at various points over an extended period.

To understand how you can apply the learnings from this article to create distinctive benefits for your organization in your journey towards supply chain resilience, please get in touch with one of the contacts listed below.