The health care industry thrives on trust: trust in providers and payers, in medicines and therapies, in supply chains, and in all the technologies that enable enterprise operations and the delivery of care.
Many business leaders are likely well aware of the sobering examples of health care AI deployments that introduced risks affecting enterprise trustworthiness and even health equity.
With the arrival of a new kind of AI—Generative AI—the stakes are even higher, and health care leaders are appropriately concerned about the risks this type of AI could create. Already, 75% of leading health care companies are experimenting with Generative AI or attempting to scale use cases, according to Deloitte’s 2024 Life Sciences and Health Care Generative AI Outlook Survey. This activity comes in tandem with the adoption of complementary technologies, including cloud computing, data modernization and analytics, and the Internet of Things (IoT). Indeed, it is digital transformation as a whole that can enable hospitals and health care systems to create truly differentiating Generative AI use cases.
The process for developing new treatments can be both time-consuming and costly. By bringing Generative AI to bear, researchers can query medical data for patterns or anomalies that point toward a discovery, and simulate drug compounds in a virtual environment to rapidly validate candidates without manufacturing them in the real world. The potential benefit is not only new insights but the capacity to transform drug discovery, permitting scale and speed that eclipse human capabilities. What is more, in a time of inflation, shrinking margins, and rising costs for supplies and labor, technologies like Generative AI can fuel innovation while limiting the costs associated with more traditional drug discovery approaches.
Generative AI-enabled tools and products point toward a future where medical professionals could leverage machine capabilities to achieve something greater than what either could do independently. But in this age of human-machine collaboration, Generative AI outputs require human validation, particularly for something as impactful as diagnosis and treatment recommendations. The reality is that all Generative AI models are subject to errors or inaccuracies (by virtue of how Generative AI works), and trust in the viability and value of these models hinges in part on how the human stakeholder assesses and validates the outputs.
Greater time and resource efficiencies support operations and improve customer service, which can enhance the patient experience and promote trust in the organization. Viewing Generative AI as one component of a technology ecosystem that collectively supports trustworthy, ethical deployments can orient decision making and investments as health care systems pursue digital transformations.
The COVID-19 pandemic revealed the complexity and fragility of the global supply chain. By consuming data on geographic characteristics, disease prevalence, socioeconomic factors, and logistical realities, Generative AI can be used to create micromarket demand forecasts and optimized supply chain strategies. This could have significant implications not just for the delivery of care but also for cost efficiency for patients, health care organizations, insurers, and governments.
Identifying and understanding Generative AI risks can help organizations make incisive decisions around how to promote trust in technologies and build a strong foundation for governance and compliance.
Deloitte leverages the Trustworthy Generative AI framework to evaluate technology use cases, both internally and with our clients. One of the reasons a framework is important is that many Generative AI deployments are inherently new and unique to the organization deploying them. Even open-source models with well-understood limitations (e.g., model “hallucination”) tend to present new risks and trust implications when trained on enterprise data in a secure environment. In short, no two organizations are the same, nor are their Generative AI deployments.
As health care organizations move forward with Generative AI and construct a framework for trust, ethics, governance, and compliance, some of the important questions include:
This is a moment for innovation and creativity, where health care organizations can look beyond the status quo and imagine how market needs and technology maturity can open the door to entirely new services, products, business models, and approaches to patient care. By accounting for risk, promoting trust, and ensuring compliance, health care enterprises can extract the greatest potential from Generative AI for their organizations and the patients they serve.