With the growing adoption of Generative AI (GenAI), fuelled by excitement about the opportunities it presents, the potential risks associated with this new and emerging technology are now a board-level topic. Due to the cost, complexity and scalability challenges of establishing GenAI capabilities in-house, organisations are increasingly relying on third party providers to realise value through innovative GenAI services and solutions. The prevalence of GenAI across third party portfolios will create a variety of strategic dependencies and risks that require significant updates to an organisation’s third party risk management (TPRM) programme. In this analysis, we explore the different levels of third party involvement in GenAI solutions, the types of risks associated with third party GenAI, and how to manage these risks across the third party lifecycle.
Industry conversations often frame the question as ‘build or buy’, i.e. whether to invest in developing a custom in-house GenAI solution or to acquire the capability through a vendor. Despite this common framing, it is not a binary decision.
There are several levels of third party involvement in GenAI solutions, ranging from fully in-house solutions (no third party at all) to fully third party solutions. Each type of solution brings a different risk profile associated with the level of influence and scope of the organisation’s control over the GenAI solution. The table below describes the seven types of GenAI as a service and the two types of in-house GenAI.
| GenAI provider | Type of solution | Description | Risk profile |
|---|---|---|---|
| Third party | | | |
| Hybrid | | | |
| In-house | | | |
Several distinctions in this table are important.
First, open access GenAI platforms (e.g. Claude, ChatGPT, DALL-E, and Gemini) should be distinguished from enterprise GenAI platforms (e.g. Deloitte PairD) that leverage GenAI but operate under the organisation’s contractual agreements on data privacy, security and scalability. Several organisations have banned open access GenAI platforms due to concerns over intellectual property and data protection, while launching enterprise chatbots to improve productivity. The policy on internal usage of open access vs. enterprise GenAI should be clear, especially regarding the types of data employees are permitted to share on each platform.
Second, full-service GenAI providers should be distinguished from GenAI-enabled providers. The former is engaged specifically for its GenAI capabilities to automate or support the delivery of a particular product, process or service; one example is a solution for the automated production of draft contracts, where GenAI is the core of the unique selling point. The latter is a solution that may not have traditionally involved GenAI but is now augmented by it; an example is an end-to-end recruitment workflow solution that uses GenAI in CV screening to automate the shortlisting of candidates for a given role. For a GenAI-enabled solution, it is important to assess whether the GenAI capability is built in-house or sourced from a fourth party, which may require additional due diligence.
Third, not all GenAI-enabled providers voluntarily disclose their GenAI usage. In the aforementioned recruitment solution, the vendor may or may not disclose that the CV screening uses GenAI. Sometimes described as ‘covert’ or ‘hidden’ GenAI, these solutions can pose risks that remain unknown to buyers who fail to identify that GenAI is being used.
Fourth, a full-service GenAI solution should be distinguished from GenAI APIs used as building blocks in a hybrid solution with both third party and in-house components. It is common to use GenAI APIs, such as GPT, Gemini, and PaLM, to outsource only the GenAI component while fully customising the use case and architecture around them. The API may be used as-is, or the underlying model may be ‘fine-tuned’ to a specific use case. In contrast to full-service solutions, here the in-house developer is responsible for building the guardrails, feedback mechanisms, KPIs/KRIs, conversation and audit logs, and the monitoring mechanism. While organisations may build these for full-service GenAI solutions above and beyond what the third party offers, it is especially important that API-driven solutions and fully managed third party solutions are held to comparable standards in mitigating GenAI risks. If a third party utilising GenAI to deliver its service applies a lower standard of control, an organisation will likely want to consider mitigating actions, such as additional internal controls or more rigorous and frequent oversight, to maintain alignment with its risk appetite. In our experience, fully managed third party solutions carry greater uncertainty and reduced control, which has led some organisations to hold these providers to higher GenAI standards than hybrid third party and in-house solutions. In this emerging area, we anticipate confusion over the split of roles and responsibilities between the third party risk team and the in-house AI governance team. Such confusion, and the resulting inconsistencies in controls between third party types, can result in misalignment with an organisation’s risk appetite.
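To make the in-house responsibilities of an API-driven hybrid solution concrete, the sketch below shows a minimal guardrail and audit-log wrapper around a generic model call. Everything here is a hypothetical illustration, not a real vendor SDK: `GuardedClient`, `BLOCKED_PATTERNS` and the stubbed `model_call` are names we have invented, and a production version would need far richer filtering, logging and monitoring.

```python
import re
import time
from dataclasses import dataclass, field
from typing import Callable

# Illustrative-only input guardrails; real deployments need far more
# sophisticated data-loss-prevention and prompt-injection checks.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{16}\b"),        # naive payment-card-number check
    re.compile(r"(?i)confidential"),  # naive sensitivity-marker check
]

@dataclass
class GuardedClient:
    """Wraps a raw GenAI API call with input guardrails and an audit log."""
    model_call: Callable[[str], str]              # e.g. a vendor SDK call
    audit_log: list = field(default_factory=list)  # conversation/audit log

    def generate(self, prompt: str) -> str:
        # Screen the prompt before it ever reaches the third party API.
        for pattern in BLOCKED_PATTERNS:
            if pattern.search(prompt):
                self.audit_log.append(
                    {"ts": time.time(), "prompt": prompt, "outcome": "blocked"}
                )
                return "[request blocked by guardrail]"
        response = self.model_call(prompt)
        self.audit_log.append(
            {"ts": time.time(), "prompt": prompt,
             "outcome": "allowed", "response": response}
        )
        return response

# A stubbed model call stands in for the real third party API here.
client = GuardedClient(model_call=lambda p: f"echo: {p}")
print(client.generate("Summarise this CONFIDENTIAL memo"))  # blocked
print(client.generate("Draft a polite meeting reminder"))   # allowed
```

The point of the sketch is that none of this screening or logging exists unless the in-house team builds it, whereas a full-service provider would be expected to supply equivalent controls as part of its offering.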
Finally, open source GenAI carries risks that are distinct from other third party GenAI. Hosting and modifying an open source model within the organisation’s own environment gives it full control over its data: sensitive information stays within the network, reducing the risk of data loss or unauthorised access. Open source Large Language Models (LLMs), such as Falcon-40B and Llama-3, offer greater transparency in architecture, training data, and methodologies compared to proprietary LLMs. There are also cost trade-offs: the absence of licence fees may be offset by increased cloud or on-premise infrastructure costs and the significant initial investment of time and effort required to build the solution.
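The cost trade-off above can be framed as a simple break-even calculation. All figures below are hypothetical placeholders for illustration only; real licence, infrastructure and build costs vary enormously by use case and scale.

```python
# Hypothetical monthly figures (illustrative only, not benchmarks):
proprietary_licence_per_month = 20_000  # vendor licence / usage fees avoided
open_source_infra_per_month = 12_000    # GPU cloud or on-prem running costs
open_source_build_cost = 250_000        # one-off in-house engineering effort

# Monthly saving from dropping the proprietary licence, net of infrastructure.
monthly_saving = proprietary_licence_per_month - open_source_infra_per_month

# Months of operation needed before the initial build investment is recouped.
break_even_months = open_source_build_cost / monthly_saving
print(f"Break-even after {break_even_months:.1f} months")
```

Under these assumed numbers the open source route only pays off after roughly two and a half years, which is why the initial investment deserves as much scrutiny as the recurring fees.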
As such, a binary classification of in-house vs. third party is insufficient for a GenAI risk assessment. It is important to understand the level of control an organisation has over each type of third party GenAI system in order to design appropriate risk assessments, due diligence, and contractual obligations. This may require AI expertise beyond what is typically housed in a third party risk team; subject matter experts can provide insight on the role of AI in a third party solution and the necessary due diligence and contracting requirements.
Due to the limited control over, and access to, third party GenAI systems, organisations may inadvertently develop a double standard in how they govern in-house GenAI compared to third party GenAI. However, the risks, regardless of the nature of the external GenAI input, remain with the organisation. It is important to enforce consistency in risk management across in-house and third party GenAI systems to remain aligned with the organisational risk appetite.
Organisational reliance on third party GenAI (of any type mentioned above) can create risk and dependencies that need to be understood, assessed and managed based on the impact to the organisation of a third party issue, incident or failure.
The lack of visibility and control an organisation has over third parties delivering GenAI as a service can result in the failure to:
GenAI risks and their potential impacts include, but are not limited to, the following major categories: i) legal and regulatory compliance risk, ii) operational and technology risk, iii) security risk, iv) privacy risk, v) reputational risk, and vi) model risk. Below are examples of questions an organisation can ask itself to help identify the extent of risk associated with each category in a given scenario.
By including such questions in the GenAI TPRM process, organisations can start to identify any strategic dependencies and quantify the business impact of any potential risk. This can inform whether the procurement of the third party for any use case is aligned to the overall organisational risk appetite.
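One way to operationalise this is to encode the six risk categories into a simple scoring rubric checked against a risk appetite threshold. The sketch below is a hypothetical illustration: the category names come from this analysis, but the scoring scale, the `within_risk_appetite` function and the threshold are placeholders an organisation would replace with its own methodology.

```python
# The six risk categories named in this analysis.
RISK_CATEGORIES = [
    "legal and regulatory compliance",
    "operational and technology",
    "security",
    "privacy",
    "reputational",
    "model",
]

def within_risk_appetite(scores: dict, threshold: int = 3) -> bool:
    """Each category is scored 1 (low risk) to 5 (high risk); any single
    category above the threshold takes the solution outside appetite."""
    missing = set(RISK_CATEGORIES) - set(scores)
    if missing:
        raise ValueError(f"unscored categories: {sorted(missing)}")
    return all(scores[c] <= threshold for c in RISK_CATEGORIES)

# Example assessment of a hypothetical third party GenAI solution:
assessment = {c: 2 for c in RISK_CATEGORIES}
assessment["privacy"] = 4  # e.g. the vendor retains prompts for model training
print(within_risk_appetite(assessment))  # False: privacy score breaches appetite
```

A real rubric would weight categories, capture the supporting questionnaire answers, and feed the result into procurement and ongoing-monitoring decisions rather than return a single pass/fail flag.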
To manage the risks associated with third party GenAI solutions, it is important to have a robust TPRM programme in place. Most organisational TPRM programmes will require an update to include the assessment of risks that are unique to GenAI as a service.
Here are our recommendations across the TPRM lifecycle:
While these activities may already be standard across organisations with robust TPRM frameworks, it is important to identify the incremental changes required to ensure the TPRM programme is fit for purpose for assessing and managing the risks of third party GenAI solutions.
In conclusion, while third party GenAI solutions can provide many benefits, organisations must carefully assess and manage the associated risks to ensure that the use of GenAI is responsible, risk managed, and aligned with the organisation's values and strategic objectives.