Deloitte’s ‘Large Language Model Controls’ as GenAI Guardrails
Addressing users’ concerns about a perceived lack of data privacy and safeguards is a major challenge that clients face in adopting Generative AI (GenAI) and Large Language Models (LLMs). AI-powered systems are transforming a multitude of sectors, but they also pose risks such as data-security breaches, confidentiality leaks, regulatory non-compliance, and negative outcomes arising from AI-generated content. To mitigate these risks, Deloitte’s 'Large Language Model Controls' provides modular, customizable controls that supplement the services offered by cloud providers.
Built to detect, prevent, and minimize undesirable model behavior in GenAI applications, Deloitte’s 'Large Language Model Controls' acts as your AI model security officer. It is designed in line with the Deloitte Trustworthy AI™ framework to enable end-to-end risk management for GenAI/LLMs. Once integrated with a client’s GenAI applications, it supplements them with data protection, bias reduction, and output-quality safeguards, helping ensure that AI-generated content remains accurate, clear, and helpful, in line with our clients’ standards and objectives.
'Large Language Model Controls' offers a suite of nine customizable controls, including Adversarial Attack Filters, PII Confidentiality Masking, Citation Retriever, Toxicity-Bias Control, RAG Evaluation, Prompt Augmentation, Randomness Detector, and Gibberish Filter. Clients utilizing Deloitte’s 'Large Language Model Controls' can reduce risks, ensure compliance, and use AI-generated content responsibly, enhancing trust and strengthening confidence in GenAI technologies.
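To make the idea of modular, composable controls concrete, the sketch below shows one way pre-prompt and post-response guardrails could be chained around an LLM call. Every name, pattern, and threshold here is an illustrative assumption; this is a minimal sketch of the general pattern, not Deloitte’s actual 'Large Language Model Controls' implementation or API.

```python
# Hypothetical sketch of chaining modular guardrail controls around an LLM call.
# All function names, regexes, and thresholds are illustrative assumptions,
# not Deloitte's actual 'Large Language Model Controls' API.
import re
from typing import Callable

def mask_pii(text: str) -> str:
    """Pre-prompt control: redact simple PII patterns (emails, phone numbers)."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)
    return text

def gibberish_filter(text: str) -> str:
    """Post-response control: block output with too few word-like tokens."""
    words = text.split()
    alpha = sum(w.isalpha() for w in words)
    if words and alpha / len(words) < 0.5:  # assumed threshold
        return "[BLOCKED: output failed gibberish check]"
    return text

def guarded_completion(prompt: str,
                       llm: Callable[[str], str],
                       pre: list[Callable[[str], str]],
                       post: list[Callable[[str], str]]) -> str:
    """Apply input controls, call the model, then apply output controls."""
    for control in pre:
        prompt = control(prompt)
    response = llm(prompt)
    for control in post:
        response = control(response)
    return response

if __name__ == "__main__":
    # Stub model standing in for any LLM endpoint.
    fake_llm = lambda p: f"Echo: {p}"
    print(guarded_completion("Contact me at jane.doe@example.com",
                             fake_llm, pre=[mask_pii], post=[gibberish_filter]))
```

Because each control is just a function over text, individual guardrails such as a toxicity check or an adversarial-prompt filter can be added, removed, or reordered per client, which is one plausible reading of what "modular and customizable" means in practice.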