Generative AI (GenAI) presents many opportunities for organizations. It is particularly suited to optimizing highly repeatable, data-centric processes and to tasks related to product development, customer experience enhancement, and predictive analytics. One key to assessing what opportunities may be available to your organization is to (1) understand how your company conducts its business; (2) identify tasks in those key workflows that are amenable to AI optimization; (3) assess the potential risks associated with those use cases; and (4) if those risks can be managed within your organization’s risk tolerance, implement AI in a safe, reliable, and compliant manner.
Many of the risks associated with AI are not unique. For example, the use of GenAI poses sustainability, data privacy, and security risks, to name a few. With that said, the National Institute of Standards and Technology (NIST)—the organization responsible for promoting innovation and industrial competitiveness by advancing measurement science, standards, and technology—finds that “AI systems also bring a set of risks that are not comprehensively addressed by current risk frameworks and approaches” [emphasis ours].1 NIST and others have identified more than a dozen risks unique to AI or uniquely intensified by AI.
Categories of GenAI risk vary based on the industry and specific use case, but can include regulatory uncertainty, product liability, fraud/misrepresentations, limited AI fluency and unharmonized terminology, fragmented internal safeguards, intellectual property, confidentiality, AI systems quality concerns, data privacy, cybersecurity, lack of accuracy, unfair bias, insufficient contractual protections, unclear procurement standards, and reputational harm.
Ethical tech
As the capabilities of GenAI expand, the CLO and the legal function may be called upon to provide guidance on legal and regulatory issues with AI and the ethical use of technology. A technology or its use is ethical when principled thinking has guided its technological design, delivery, and implementation.2
Two examples of ethical risks CLOs may need to address are accuracy risks and tampering risks. Hallucinations are one type of accuracy risk and occur when a GenAI model provides a coherent answer with complete confidence that is wholly or partially invalid based on the data it has been trained on. When a model hallucinates, it may invent references and sources that are nonexistent.3
A second type of ethical risk is tampering risk. Deepfakes are an example of a tampering risk. These highly realistic fake or “synthetic” images and videos are often intentionally created to fraudulently misrepresent what a person said or did.4 When deepfakes look and sound real, the public is more likely to believe the images and videos are real, and the potential for reputational damage increases.5
An ethical tech framework such as the one described below may be useful in evaluating both long-standing risks and risks that have become more common as GenAI has been more broadly adopted. For example, organizations may use the framework to evaluate how they can best protect themselves from tampering risks and from inadvertently sharing fake videos or photos. Legal functions may also use the framework to evaluate accuracy risks and whether the controls in place to monitor for hallucinations are sufficient to minimize the risk of harm. A framework for assessing the application of these technologies can be applied even without deep technical expertise.
Learn about other areas of Generative AI and how it impacts CLOs and their teams. From the basics to the more complex challenges, these resources are designed to help you navigate GenAI’s legal implications and risks with ease.
1 National Institute of Standards and Technology (NIST), Artificial Intelligence Risk Management Framework (AI RMF 1.0), January 2023.
2 Deloitte, Proactive risk management in Generative AI, 2023.
3 Ibid.
4 Jeff Loucks, “Deepfakes and AI,” October 26, 2018; Ali Swenson, “FEC moves toward potentially regulating AI deepfakes in campaign ads,” Associated Press, August 10, 2023.
5 Richard Torrenzano, “Generative A.I. has supercharged the speed at which false information spreads. Can our reputations survive the ‘two-hour internet day?’,” Fortune, July 18, 2023.
This document contains general information only and the authors are not, by means of this document, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This document is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional advisor.
The authors shall not be responsible for any loss sustained by any person who relies on this document.
As used in this document, “Deloitte” means Deloitte Financial Advisory Services LLP, which provides risk and financial advisory services, including forensic and dispute services; and Deloitte Transactions and Business Analytics LLP, which provides risk and financial advisory services, including eDiscovery and analytics services. Deloitte Transactions and Business Analytics LLP is not a certified public accounting firm. These entities are separate subsidiaries of Deloitte LLP. Please see www.deloitte.com/us/about for a detailed description of our legal structure. Certain services may not be available to attest clients under the rules and regulations of public accounting. Deloitte does not provide legal services and will not provide any legal advice or address any questions of law.
Copyright © 2025 Deloitte Development LLC. All rights reserved.
Copyright © 2025 DLA Piper. All rights reserved.