
Proactive risk management in Generative AI

Generative AI ethics, accountability and trust


New bots on the block

To this point, AI has broadly been used to automate tasks, uncover patterns and correlations, and make accurate predictions about the future based on current and historical data. Generative AI is designed to create data that looks like real data: it produces digital artifacts that appear to have the same fidelity as human-created artifacts. Natural language prompts, for example, can lead a neural network to generate images that are in some cases indistinguishable from authentic ones. Large language models that create text sometimes supply source information, suggesting to the user that their outputs are not only persuasively phrased but also factually true. “Trust me,” the model seems to say.

CIOs and technologists may already know that generative AI is not “thinking” or being creative in a human way, and that its outputs are not necessarily as accurate as they appear. Non-technical business users, however, may not know how generative AI functions or how much confidence to place in its outputs. The challenge is magnified by the fact that this area of AI is evolving at a rapid pace. If organizations and end users struggle just to keep up with generative AI’s evolving capabilities, how much harder might it be to anticipate the risks and place real trust in these tools?


1 | Managing hallucinations and misinformation

A generative model draws on its training data to produce coherent language or images, which is part of what has startled and enticed early users. With natural language applications, the phrasing and grammar may be convincing while the substance is partially or entirely inaccurate, or, when presented as a statement of fact, simply false. One of the risks with this kind of application is that it can “hallucinate” an inaccurate output with complete confidence. It can even invent references and sources that do not exist. The model could be forgiven: its function is to generate digital artifacts that look like human-created artifacts. Yet coherent data and valid data are not necessarily the same, leaving end users of large language models to question whether an eloquent output is factually reliable at all.
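Because a model can invent sources, some organizations add a post-processing step that checks whether cited references can be matched to material they actually hold. The sketch below is a minimal, hypothetical illustration of such a check; the citation format, source list, and function names are illustrative assumptions, not part of any particular product.

```python
# Illustrative sketch: flag citations in a model response that do not match
# a trusted reference list. All names and data here are hypothetical.
import re

# Hypothetical in-house list of sources the organization actually holds.
TRUSTED_SOURCES = {
    "2023 internal market survey",
    "Q4 supplier risk assessment",
}

def extract_citations(response_text: str) -> list[str]:
    """Pull anything the model presented as a source, e.g. '[source: ...]'."""
    return re.findall(r"\[source:\s*([^\]]+)\]", response_text, flags=re.IGNORECASE)

def flag_unverified_citations(response_text: str) -> list[str]:
    """Return citations that cannot be matched to a known, trusted source."""
    trusted = {s.lower() for s in TRUSTED_SOURCES}
    return [c for c in extract_citations(response_text)
            if c.strip().lower() not in trusted]

if __name__ == "__main__":
    answer = ("Demand grew 40% last year [source: 2023 internal market survey] "
              "and will double by 2030 [source: Global Widget Outlook 2031].")
    print(flag_unverified_citations(answer))  # ['Global Widget Outlook 2031']
```

A check like this only confirms that a citation points somewhere real; it does not verify that the cited material actually supports the claim, which still requires human review.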

There is also the risk of inherent bias within the models, owing to the data on which they are trained. No single company can create and curate all of the training data needed for a generative AI model; the necessary corpus is too expansive, measured in tens of terabytes. Models are therefore trained largely on publicly available data, which carries latent bias and with it the potential for biased AI outputs.


2 | The matter of attribution

Generative AI outputs reflect the original training data, and that information came from the real world, where attribution and copyright matter and are legally enforced. Training data can include online encyclopedias, digitized books, and customer reviews, as well as curated data sets. Even if a model does cite accurate source information, it may still present outputs that obscure attribution or even cross the line into plagiarism and copyright or trademark violations.

How do we contend with attribution when a tool is designed to mimic human creativity by parroting back something drawn from the data it processes? If a large language model outputs plagiarized content and the enterprise uses that content in its operations, a human is accountable when the plagiarism is discovered, not the generative AI model. Recognizing the potential for harm, organizations may implement checks and assessments to help ensure attribution is appropriately given. Yet, if human fact-checking of AI attribution becomes a laborious process, how much productivity can the enterprise actually gain by using generative AI?
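As a rough illustration of what such a check might look like, the sketch below compares generated text against a small set of known source passages and flags any long word-for-word overlap. The corpus, n-gram length, and function names are hypothetical assumptions; a production attribution check would be far more sophisticated.

```python
# Illustrative sketch: a coarse attribution check that flags generated text
# sharing long word sequences with known source passages. The corpus and
# n-gram length are hypothetical choices for demonstration only.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Return all contiguous n-word sequences in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlapping_sources(generated: str, corpus: dict[str, str], n: int = 8) -> list[str]:
    """Return names of source passages sharing any n-word sequence with the output."""
    gen_grams = ngrams(generated, n)
    return [name for name, passage in corpus.items()
            if gen_grams & ngrams(passage, n)]

if __name__ == "__main__":
    known_passages = {  # hypothetical licensed or copyrighted material
        "analyst-report-2024": "supply chains will remain volatile through the end of the decade",
    }
    draft = "Our outlook is that supply chains will remain volatile through the end of the decade."
    print(overlapping_sources(draft, known_passages))  # ['analyst-report-2024']
```

Automated screening of this kind can narrow the set of outputs a human reviewer must examine, which speaks directly to the productivity question above.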


3 | Real transparency and broad user explainability

End users can include people who have limited understanding of AI generally, much less the complicated workings of large language models. The lack of a technical understanding of generative AI does not absolve the organization from focusing on transparency and explainability. If anything, it makes it that much more important.

Today’s generative AI models often come with a disclaimer that outputs may be inaccurate. That may seem like transparency, but in reality many end users do not read the terms and conditions and do not understand how the technology works, so the large language model’s explainability suffers. To participate in risk management and ethical decision making, users need accessible, non-technical explanations of generative AI, its capabilities and limits, and the risks it creates.
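One way teams sometimes operationalize this is to attach a plain-language notice to every generated answer at the point of use, rather than burying caveats in terms and conditions. The sketch below is a hypothetical example of such a wrapper; the class, wording, and model name are illustrative assumptions.

```python
# Illustrative sketch: pair every generated answer with a plain-language
# disclosure before it reaches a business user. Names and wording are hypothetical.
from dataclasses import dataclass

PLAIN_LANGUAGE_NOTICE = (
    "This answer was drafted by a generative AI model. It predicts plausible "
    "text from its training data; it does not check facts. Verify figures, "
    "names, and sources before relying on them."
)

@dataclass
class ExplainedResponse:
    answer: str
    model_name: str
    notice: str = PLAIN_LANGUAGE_NOTICE

    def render(self) -> str:
        """Return the answer with its disclosure, ready to display to the user."""
        return f"{self.answer}\n\n[{self.model_name}] {self.notice}"

if __name__ == "__main__":
    resp = ExplainedResponse(answer="Projected savings: 12% in year one.",
                             model_name="internal-llm-v1")
    print(resp.render())
```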

Business users should have a real understanding of generative AI because it is the end user (and not necessarily the AI engineers and data scientists) who contends with the risks and the consequences of trusting a tool, regardless of whether they should.

