Mark Urbanczyk

United States

Dr. Juergen Klenk

United States

Alison Muckle Egizi

United States

Joe Mariani

United States

Federal health agencies stand on the cusp of leveraging artificial intelligence to markedly improve resource efficiency and accelerate innovation.1 By streamlining administrative processes and optimizing expenses, AI can free scientists to focus on breakthrough research instead of administrative tasks and enable clinicians to spend more time with patients.2 The first drug developed using generative AI has completed Phase 1 human clinical trials.3 And a new AI model has the potential to address up to 17,000 diseases with insufficient treatment options by repurposing existing drugs for “off-label” use, offering new hope to patients with rare diseases.4

While the potential value seems clear, federal health agencies face challenges with scale, with many AI-based efforts stuck in the pilot phase.5 Deloitte’s 2024 State of Generative AI in the Enterprise survey found that over two-thirds of surveyed organizations had transitioned less than a third of their AI initiatives into full-scale production.6

However, scaling AI is important if agencies are to achieve both efficiency and mission goals. Scaling AI could mean that everybody who could benefit from using an AI tool can do so. In some cases, that may mean enterprisewide implementation; in others, it could mean targeted implementation with just the team that performs a specialized function where the tool could generate maximum value. For instance, an agency might adopt an advanced AI diagnostic tool selectively, using it primarily in specialized clinical settings where it improves patient care. Yet, regardless of the approach, the foundation to support scalable AI should be solidly in place.

Having said that, leaders can’t simply flip a switch to accelerate AI integration. Federal health agencies face obstacles, including data-quality challenges, interoperability issues, insufficient resources, and security hurdles.7 But there’s a path forward for agencies to move beyond experimentation and create an environment to deploy AI-based initiatives at scale safely, securely, and effectively. By scaling AI strategically, federal health leaders can turn resource limitations into catalysts of innovation, potentially increasing the quality and efficiency of government services.

This article illustrates how AI at scale can improve job experience and performance for common federal health roles. We also outline five important capabilities to achieve AI scalability: data, ethics, infrastructure, workforce, and leadership. Success is about more than adopting new technology; it is about improving health outcomes, making smarter use of resources, and expanding operational capacity.

Envisioning AI at scale in federal health

To assess the value of using AI at scale for the health sector, we looked at four federal health agency roles—grants manager, scientist, HR coordinator, and clinician—and envisioned what these roles could look like with fully functional AI-based technologies integrated into daily work.

1. The government grants manager

Government grants managers generally facilitate federal research, innovation, and support programs. But with data to process and stakeholders to consult, award selection and management can become burdensome. That’s where AI can be a game-changer: Grants managers can use it to strategically allocate attention and resources, help triage grant application reviews, improve the efficiency of grant compliance and risk monitoring, and personalize applicant support (figure 1).

Grants managers in federal health agencies are already beginning to harness the power of AI to boost performance. The National Institutes of Health (NIH) introduced an automated, AI-driven referral tool in 2022, designed to aid in assigning applications for review. Leveraging data from previous review cycles, the tool streamlines the process and eliminates manual work by identifying duplicate submissions and directing applications to the appropriate review branches.8

What is possible with AI at scale?

These innovations mark progress yet represent only the beginning of AI’s transformative potential. At scale, AI can support efforts to reinforce grants managers as strategic leaders of scientific progress and innovative government programs and activities. Empowered by AI, grants managers can better deliver proactive technical assistance and continuous quality improvement, and enable real-time identification of fraud, waste, and abuse to improve outcomes for people. Grants managers can also use AI to facilitate adaptive grants management, where grants flexibly evolve in real time as new data emerges or breakthroughs occur.

In this reimagined landscape, grants managers can reshape the way grants deliver meaningful change, achieving more, and better, science, programs, and activities while maximizing resource efficiency (figure 2).

2. The government scientist

Whether it’s developing next-generation diagnostics, testing new therapies, or evaluating drug safety, government health scientists generate new knowledge and evidence that can protect and improve the nation’s health. With the help of AI, scientists can augment their roles by streamlining workflows, speeding data interpretation, and enabling greater collaboration. This should allow them to focus on innovative thinking and hypothesis generation rather than computation. AI also holds potential in drug discovery, mining massive data sets to identify hard-to-find targets,9 and in reducing the time and costs of clinical trials.10 One life science breakthrough used a gen AI model to help solve a protein-folding problem that had plagued scientists for half a century.11 The researchers were recognized with the 2024 Nobel Prize in Chemistry.12

Federal health leaders are already leveraging AI to facilitate the scientific process. The NIH’s TrialGPT tool helps improve the match of patients to clinical trials.13 The Food and Drug Administration is developing an AI-based computer assessment tool to improve the accuracy and consistency of drug labeling.14 An NIH study developed an AI tool aiming to match patients to cancer drugs by analyzing molecular characteristics of tumors and predicting which treatments might be most effective.15 More and more, AI innovations are augmenting federal health-led scientific research (figure 3).

What is possible with AI at scale?

Scientists do hit roadblocks—encountering unexpected data patterns, combing through massive data sets, or hitting dead ends pursuing answers to complicated problems. During those times, scientists can use AI at scale to propose alternative protocols, identify overlooked data, or highlight new variables to test. Advanced analytics have the potential to integrate cross-disciplinary data, approaches, and insights, sparking novel experimental angles and breakthrough discoveries. At scale, future scientists can use AI to continually improve both ideas and methods, fueling new pathways to innovation (figure 4).

3. The human resources coordinator

AI can support the optimization of human resources coordination tasks, such as explaining benefits and analyzing employee profiles to trace skill gaps. Federal health agencies are already taking decisive steps to incorporate AI tools into workforce planning and management. For instance, the Department of Health and Human Services (HHS) has already developed standard AI position descriptions and job analyses using the Office of Personnel Management’s (OPM) Direct Hire Authority (DHA) framework.16 AI tools are helping to transform HR functions, freeing up HR coordinators to focus more time on strategic workforce planning (figure 5).

What is possible with AI at scale?

Among other aims, HR coordinators seek to improve employee engagement, optimize hiring processes, and connect the right people with the right training. AI at scale can not only help minimize administrative burdens and free up more time for strategic thinking, but it can also help HR coordinators create personalized employee talent journeys—from onboarding to exit—and design custom learning and development opportunities (figure 6).

4. The clinician

Clinicians across federal agencies tend to balance patient care, research, and regulatory responsibilities while navigating increasing administrative demands. The health care industry has already been saving time and reducing costs with the help of AI.17 Using AI, clinicians can reduce time spent on documentation, improve medical imaging, prevent hospital readmissions, and save time on emails (figure 7).  

In December 2024, the Department of Veterans Affairs (VA) expanded its testing of Medtronic’s GI Genius system to help improve colorectal polyp detection during colonoscopies.18 The program is now installed in 140 VA facilities.19 Additionally, the Veterans Health Administration (VHA) is piloting ambient scribe technology to help reduce physician burnout by recording patient visits and automatically generating visit summaries for clinician review.20 Approved notes will be directly incorporated into electronic patient health records, which is expected to reduce documentation time.21 VHA is also exploring gen AI applications to assist with drafting emails, summarizing policy documents, and analyzing survey data, aiming to streamline tasks across the department.22

What is possible with AI at scale?

AI is already helping to transform patient care by streamlining clinician workflows, enhancing diagnostic accuracy, and reducing documentation burdens. And its potential at scale goes even further. Advanced, real-time decision support systems could provide clinicians with more tailored insights—from early diagnostic signals to personalized treatment recommendations—while preserving the clinician’s essential role in patient care. To improve the success of AI implementations at scale, agencies can include clinicians at the center of the process.23

In this vision, clinicians can use AI tools to aggregate and analyze patient data. This can help surface potential issues, optimize resource allocation, and improve care coordination in busy clinical environments. Supported by AI, clinicians of the future could achieve better lifetime care and continuous care innovation (figure 8).

Realizing the future of AI in federal health: Five capabilities for scale

These four personas—grants manager, scientist, HR coordinator, and clinician—point toward a future in which AI has the potential to dynamically shift how federal health work is done.24 But how can an agency scale a system designed to serve thousands or millions of people while safeguarding the population? We propose rethinking the fundamentals of readiness through five capabilities for scale: data, ethics, infrastructure, workforce, and leadership.

Data, data, and more data

AI models are built on data, but in federal health agencies, data fragmentation, inconsistency, and insufficient information make it difficult to train reliable models.25 Health data is scattered across electronic health records, insurance claims, and research databases,26 often in incompatible formats and with variable definitions. Even within agencies, legacy systems can complicate seamless data integration.27

Data challenges in health care have been persistent28 and can raise concerns, since AI trained on incomplete or outdated data can lead to misdiagnoses, resource misallocation, or ineffective policy decisions.29 Federal agencies recognize this issue and are taking steps to improve data quality. Initiatives such as the Department of Health and Human Services’ Health Data Initiative30 aim to create more standardized and interoperable health data ecosystems. But the health data landscape remains complex, and it can be challenging to optimize AI utilization at scale. To help improve federal health data readiness for scaling AI, agencies can:

  • Prioritize targeted data improvement efforts. Instead of waiting for a comprehensive, centralized data overhaul, agencies can focus on incremental, high-impact fixes such as assembling a cross-agency task force to define priority data sets for AI applications. This is akin to the Testing Risks of AI for National Security Taskforce, which brings together federal agencies to implement safe and secure AI innovation.31 Agencies can also mandate real-time data quality checks that flag inconsistencies such as mislabeled and ambiguous data.32
  • Consider federated AI to work around data fragmentation. Since health data may not be easily or safely centralized, agencies could invest in federated AI, which allows models to learn from distributed data sets without moving or exposing patient data.33 This tends to reduce privacy risks and eliminate the need for data-sharing agreements. Some pilot programs—such as the National Institutes of Health’s federated learning initiatives in cancer research—are already testing this approach.34 Scaling these efforts could accelerate AI adoption without waiting for perfect data integration.
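The federated approach described above can be sketched in a few lines: each site trains on its own data and shares only model weights, which a central server averages. This is a minimal, hypothetical illustration (synthetic data, a simple linear model trained by gradient descent), not any agency's actual implementation.

```python
import numpy as np

# Minimal federated-averaging sketch (hypothetical data and model):
# each "site" trains a local linear model on its own records, and only
# the model weights -- never the patient-level data -- leave the site.

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # hidden relationship we hope to recover

def local_update(w, X, y, lr=0.1, epochs=50):
    """Gradient-descent steps on one site's private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three sites with disjoint data sets that are never pooled
sites = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=40)
    sites.append((X, y))

w_global = np.zeros(2)
for _ in range(5):  # federation rounds
    local_ws = [local_update(w_global, X, y) for X, y in sites]
    w_global = np.mean(local_ws, axis=0)  # server averages weights only

print(np.round(w_global, 2))  # should approach [2.0, -1.0]
```

In practice, production frameworks add secure aggregation and differential privacy on top of this basic loop, but the core design choice is the same: move the model to the data, not the data to the model.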

Build ethical guardrails to help manage risks  

While AI offers potential to improve the nation’s health and streamline federal health operations, it can introduce risks, including skewed decision-making, data security vulnerabilities, and a lack of transparency.35 In a recent forum, AI leaders across government and industry were torn between trial-and-error and structured, oversight-based approaches to governance.36 In 2024, Deloitte conducted a survey of 2,770 global business leaders, in which only 23% of respondents rated their organizations as highly prepared in the areas of AI risk and governance.37 Without deliberate governance, these risks could undermine patient safety. To help manage risks, federal agencies can:

  • Partner with end users in the AI system design process. When technology implementation reserves end-user input for late-stage testing, it risks failing to achieve its goals. If AI tools are to deliver on their promise of improving efficiency and outcomes, the clinicians, researchers, data scientists, and grants managers who rely on them should play a central role in shaping them. Human-centered design can help agencies tailor AI strategies to meet real-world needs and can reduce risks.
  • Incorporate trust into every stage of AI development. Agencies can look to build AI systems to be transparent, accountable, and aligned with ethical standards. Integrating trustworthy AI principles early can prevent risks from manifesting downstream and can scale AI solutions in a way that is both responsible and sustainable. In 2023, Veterans Affairs approved an agencywide trustworthy AI framework that served as the foundation for safe and secure AI implementations and compliance across the entire agency.38
  • Strengthen privacy and security safeguards. AI systems handling sensitive health data must comply with the Health Insurance Portability and Accountability Act and the Federal Information Security Management Act. Agencies could implement zero-trust architectures to minimize unauthorized access and enhance cybersecurity. The Office of Management and Budget has already issued directives in this regard.39 Agencies could also establish AI ethics review boards to evaluate high-risk AI deployments in health care settings.
  • Align AI regulation with rapid technological advancements. AI innovation is outpacing regulatory frameworks, creating potential gaps in oversight.40 While federal agencies are both AI adopters and regulators, agency leaders can proactively collaborate with policymakers to refine policies that balance innovation with safety. The procurement processes of AI tools could accommodate iterative development, testing, and real-world validation.

Infrastructure is the foundation

Without a strong infrastructure in place, agencies cannot implement a scalable AI model.41 Neglecting infrastructure can lead to disjointed systems, inefficiencies, and security vulnerabilities. AI systems need to be built, tested, and deployed with reliability, security, and interoperability in mind. To help build durable infrastructure for scale, federal health agencies can:

  • Invest in centralized AI marketplaces to standardize and scale. Rather than building AI tools in silos, agencies can establish centralized AI marketplaces within their respective agencies—platforms on which vetted, approved models can be shared across agencies. This tends to reduce duplication, standardize security and privacy checks, and accelerate AI deployment. Such platforms are already becoming a reality in government: The Department of Defense’s Chief Digital and Artificial Intelligence Office has created a marketplace for AI models and technologies, intended to accelerate and streamline their acquisition and use.42
  • Embed security and auditability into AI infrastructure. Security should not be an afterthought. AI infrastructure should have built-in safeguards, such as automated privacy checks incorporated into platforms to maintain compliance with regulations. Auditability features could track AI decisions and data usage for accountability.
  • Build for future expansion. Infrastructure should support long-term AI advancements. A scalable system tends to demand interoperability and modular, adaptable AI platforms that can integrate with emerging technologies.

Workforce needs for scalable AI

Scaling AI in federal health agencies appears to be a workforce challenge as well as a technology challenge. AI tools can support decision-making, not replace human capabilities. For AI to be truly effective, agencies can create workforce strategies that provide appropriate training and oversight based on job roles. To help support a workforce for scalable AI, federal health agencies can:

  • Tailor AI training for different roles. Not every federal health employee needs to be an AI expert, but organizations can benefit when all staff are fluent in how to leverage AI for their job roles. A successful workforce strategy may feature training tailored to specific job roles and requirements.
  • Embed AI literacy at every level. AI literacy can go beyond technical skills. Agencies can build training programs that teach critical evaluation of AI outputs, so staff understand models’ uncertainty and verification methods. The Department of Health and Human Services’ planned AI Corps program aims to position AI experts in each of the department’s agencies to bolster expertise.43
  • Build a culture of confidence, not blind trust. A well-trained workforce shouldn’t blindly trust AI tools or results—it should question and work to refine the technology. In this sense, it is important for agencies to consider cultivating a culture in which AI is seen as a decision-support tool, not a decision-maker, and in which employees feel empowered to escalate concerns. AI deployments can and should prioritize transparency and explicability.

Leadership: Align vision with action

Effectively scaling AI involves leaders thinking beyond the technology and focusing on how it aligns with agency missions. Initiatives can directly support agency goals, such as improving health outcomes, enhancing service delivery, or advancing scientific research. Leaders who view AI as a strategic asset to achieve specific mission objectives—not as an end in itself—could be better positioned to drive meaningful, long-term impact. Strong leadership is an important factor in long-term transformation. To unlock AI’s full potential, leaders should:

  • Set a clear vision for how AI integrates into the agency’s strategy. Articulating measurable goals and communicating this vision across all organizational levels tends to foster clarity, collaboration, and commitment among stakeholders. Hiring chief AI officers in various federal agencies can help embed AI into operations.44
  • Track progress and measure mission impact. Dashboards can translate complex metrics into actionable insights, enabling leaders to assess which AI tools are most effective in delivering mission objectives. By evaluating tools against predefined metrics for mission impact—such as advancing the quality of patient care or boosting operational performance—leaders can prioritize scaling initiatives that help provide the greatest value.
  • Evaluate costs dynamically. AI tools can provide leaders with real-time insights into resource usage patterns, enabling them to proactively adjust allocations and optimize spending as demand scales or fluctuates.

Agentic AI: The next leap in AI readiness

With these foundations in place, federal health organizations can not only reap the benefits of AI today but also be ready for the next big leap in AI itself. Forward-thinking government leaders are likely looking beyond using AI to automate single administrative tasks. The next step—and perhaps the future of AI at scale in federal health—is agentic AI.

AI agents are the conductors of the AI world. Where a single task could be automated by AI, AI agents can coordinate the work of several other automation tools and collaborate with human experts to remake entire workflows. This can allow federal health leaders to augment more complex workflows using AI. Federal health agencies may look to agentic AI to provide real-time answers to constituents, personalize employee benefits planning, and propose grant changes to improve outcomes based on live data.45 These AI agents can continuously scan transactions, flag anomalies, and check resource use for efficiency.
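As a toy illustration of that orchestration pattern, the sketch below shows an "agent" loop that routes each grant transaction through narrow automated checks and escalates flagged items to a human reviewer. All names, thresholds, and rules here are hypothetical, chosen only to make the coordination idea concrete.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    grant_id: str
    amount: float
    category: str

# "Tools" the agent can call -- each automates one narrow check
def anomaly_check(tx, history_mean=10_000.0):
    # Flag spending far outside the historical average (illustrative rule)
    return abs(tx.amount - history_mean) > 3 * 2_500.0

def category_check(tx, allowed=("research", "equipment", "training")):
    return tx.category not in allowed

def agent_review(transactions):
    """Route each transaction through every tool; escalate any flags."""
    escalations = []
    for tx in transactions:
        reasons = []
        if anomaly_check(tx):
            reasons.append("amount anomaly")
        if category_check(tx):
            reasons.append("unrecognized category")
        if reasons:
            # Escalate to a human reviewer -- the agent supports,
            # not replaces, the decision-maker
            escalations.append((tx.grant_id, reasons))
    return escalations

txs = [
    Transaction("G-001", 9_800.0, "research"),
    Transaction("G-002", 55_000.0, "research"),
    Transaction("G-003", 10_200.0, "travel"),
]
print(agent_review(txs))  # G-002 and G-003 are flagged for review
```

A production agent would replace these hard-coded rules with learned models and add logging for auditability, but the control flow, coordinating several narrow tools and keeping a human in the loop for escalations, is the essence of the agentic pattern described above.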

Deloitte analysis of federal Occupational Information Network data suggests that AI agents carry the potential to reshape grants management: In the Department of Health and Human Services alone, AI automation may be able to save more than six million hours—approximately 40% of the total hours that the federal grant application review process usually takes.46 AI agents may soon be able to aid data analysts, engineers, physicians, and psychologists, creating a more adaptive, holistic approach to federal health funding.47

AI readiness in federal health could mean preparing for a workforce that collaborates with AI agents across multiple roles in the future. Grants managers may be able to shift from paperwork to strategy, using agents to help inform better funding decisions. With AI agents, scientists can accelerate discoveries by processing vast data sets, identifying patterns, and even assisting in generating breakthrough innovations in vital research areas where science has been stuck for decades. HR coordinators can use agents to augment hiring, reducing time to hire and optimizing workforce planning. And clinicians can integrate AI agents into diagnostics and care coordination to get better insights on patient care and offer a more streamlined patient experience.48

The Advanced Research Projects Agency for Health has been exploring the agentic AI landscape and ways to integrate AI agents into federal health operations.49 This could signal a broader shift toward autonomous decision-making systems that are expected to enhance efficiency, accuracy, and patient outcomes.50

Federal health agencies can lead the change today

AI at scale has the power to support federal health agencies in combating health challenges facing Americans today. Federal health agencies are already reaping the benefits of AI by accelerating review time for new therapies, advancing scientific discovery, and cutting time on administrative tasks like drafting emails and summarizing documents.51

As AI appears poised to transform health nationwide, a key challenge for agencies will be successfully moving from experimentation to scaled deployment. Scaling AI means more than adopting new tools. It involves removing obstacles to innovation and laying the groundwork with solid infrastructure, which agencies can build on, powering resource efficiency across the health system.

The pace of AI adoption in government appears to be accelerating, and federal health agencies have a chance to lead this transformation. By taking deliberate steps, federal health agencies can confidently scale AI in ways that push the boundaries of innovation and advance human flourishing, both now and in the years to come.


Endnotes

  1. Russell T. Vought, “Memorandum for the heads of executive departments and agencies,” The White House, April 3, 2025.
  2. Ophelie Lavoie-Gagne et al., “Artificial intelligence as a tool to mitigate administrative burden, optimize billing, reduce insurance- and credentialing-related expenses, and improve quality assurance within health care systems,” Arthroscopy (2025); President’s Council of Advisors on Science and Technology, “Report to the President on supercharging research: Harnessing artificial intelligence to meet global challenges,” April 2024; Lisa D. Ellis, “The benefits of the latest AI technologies for patients and clinicians,” Harvard Medical School, Aug. 30, 2024.
  3. Hayden Field, “The first fully A.I.-generated drug enters clinical trials in human patients,” CNBC, June 29, 2023; Feng Ren et al., “A small-molecule TNIK inhibitor targets fibrosis in preclinical and clinical models,” Nature Biotechnology 43 (2024).
  4. Ekaterina Pesheva, “Researchers harness AI to repurpose existing drugs for treatment of rare diseases,” Harvard Medical School, Sept. 25, 2024.
  5. Edward Van Buren, William Eggers, Tasha Austin, Joe Mariani, and Pankaj Kishnani, “Scaling AI in government: How to reach the heights of enterprisewide adoption of AI,” Deloitte Insights, Dec. 13, 2021.
  6. Jim Rowan, Beena Ammanath, Costi Perricos, Brenna Sniderman, and David Jarvis, “State of Generative AI in the Enterprise: Quarter three report,” Deloitte, August 2024.
  7. Rishi P. Singh, Grant L. Hom, Michael D. Abramoff, J. Peter Campbell, and Michael F. Chiang, “Current challenges and barriers to real-world artificial intelligence adoption for the healthcare system, provider, and the patient,” Translational Vision Science and Technology 9, no. 2 (2020); Madison Alder, “Stanford report: Despite federal AI progress, barriers to governance persist,” FedScoop, Jan. 17, 2025.
  8. CITI Program staff, “Enhancing NIH grant management operations with AI and digital technology,” CITI Program, Nov. 21, 2024.
  9. Benquan Liu, Huiqin He, Hongyi Luo, Tingting Zhang, and Jingwei Jiang, “Artificial intelligence and big data facilitated targeted drug discovery,” Stroke and Vascular Neurology 4, no. 4 (2019).
  10. Ronald Chow et al., “Use of artificial intelligence for cancer clinical trial enrollment: A systematic review and meta-analysis,” JNCI: Journal of the National Cancer Institute 115, no. 4 (2023).
  11. Josh Abramson et al., “Accurate structure prediction of biomolecular interactions with AlphaFold 3,” Nature 630 (2024).
  12. Nature, “Nobel prize in chemistry 2024,” Oct. 9, 2024.
  13. National Institutes of Health, “NIH-developed AI algorithm matches potential volunteers to clinical trials,” press release, Nov. 18, 2024.
  14. ICF, “FDA applies machine learning to streamline drug safety reviews,” accessed March 6, 2025.
  15. National Institutes of Health, “NIH researchers develop AI tool with potential to more precisely match cancer drugs to patients,” press release, April 18, 2024.
  16. US Department of Health and Human Services, “Strategic plan for the use of artificial intelligence in health, human services, and public health,” January 2025.
  17. Deloitte, “How AI can help hospitals strengthen their financial performance and reduce clinician burnout,” accessed June 2, 2025.
  18. FDA Today, “Medtronic broadens GI Genius reach with fresh VA agreement,” Dec. 2, 2024.
  19. Ibid.
  20. Grace Dille, “VA working to reduce administrative burden with AI pilots,” MeriTalk, May 29, 2024.
  21. Ibid.
  22. Ibid.
  23. Tim Small, Laura Baker, Alison Muckle Egizi, Ipshita Sinha, Paige Leiser, and Joe VerValin, “Health care innovation hubs: A catalyst for technology adoption across federal health,” Deloitte Insights, Aug. 27, 2024.
  24. Margaret Chustecki, “Benefits and risks of AI in health care: Narrative review,” Interactive Journal of Medical Research 13, Nov. 18, 2024.
  25. US Government Accountability Office, “Science & tech spotlight: Generative AI in health care,” Sept. 9, 2024.
  26. Keith Feldman, Reid A. Johnson, and Nitesh V. Chawla, “The state of data in healthcare: Path towards standardization,” Journal of Healthcare Informatics Research 2, May 22, 2018.
  27. Amir Torab-Miandoab, Taha Samad-Soltani, Ahmadreza Jodati, and Peyman Rezaei-Hachesu, “Interoperability of heterogeneous health information systems: A systematic literature review,” BMC Medical Informatics and Decision Making 23, no. 18 (2023).
  28. Joshua R. Vest and Larry D. Gamm, “Health information exchange: Persistent challenges and new strategies,” Journal of the American Medical Informatics Association 17, no. 3 (2010): pp. 288–294.
  29. Natalia Norori, Qiyang Hu, Florence Marcelle Aellen, Francesca Dalia Faraci, and Athina Tzovara, “Addressing bias in big data and AI for health care: A call for open science,” Patterns 2, no. 10 (2021).
  30. US Department of Health and Human Services, “HealthData.gov,” accessed March 7, 2025.
  31. US Department of Commerce, “U.S. AI Safety Institute establishes new U.S. government taskforce to collaborate on research and testing of AI models to manage national security capabilities & risks,” press release, Nov. 20, 2024.
  32. T. V. Nguyen et al., “Efficient automated error detection in medical data using deep-learning and label-clustering,” Scientific Reports 13, Nov. 9, 2023.
  33. Phoebe Clark, Eric K. Oermann, Dinah Chen, and Lama A. Al-Aswad, “Federated AI, current state, and future potential,” Asia-Pacific Journal of Ophthalmology 12, no. 3 (2023): pp. 310–314.
  34. Jill S. Barnholtz-Sloan, “Federated learning—a solution for democratizing data for cancer research?” Cancer Data Science Pulse, Jan. 31, 2023.
  35. Bonnie Kaplan, “Seeing through health information technology: The need for transparency in software, algorithms, data privacy, and regulation,” Journal of Law and the Biosciences 7, no. 1 (2020).
  36. Heidi Vella, “Leaders divided on level of governance needed for responsible AI,” AI Business, Dec. 9, 2024.
  37. Rowan, Ammanath, Perricos, Sniderman, and Jarvis, “State of Generative AI in the Enterprise: Quarter three report.”
  38. David B. Isaacks and Andrew A. Borkowski, “Implementing trustworthy AI in VA high reliability health care organizations,” Federal Practitioner 41, no. 2 (2024).
  39. Zero Trust Data Security Working Group, “Federal Zero Trust Data Security Guide,” October 2024.
  40. Tom Wheeler, “The three challenges of AI regulation,” Brookings, June 15, 2023.
  41. Naomi Haefner, Vinit Parida, Oliver Gassmann, and Joakim Wincent, “Implementing and scaling artificial intelligence: A review, framework, and research agenda,” Technological Forecasting and Social Change 197, December 2023.
  42. Tradewinds, “Your marketplace for AI/ML, digital and data analytics solutions,” accessed June 4, 2025.
  43. Enlli Lewis, “From strategy to impact: Establishing an AI corps to accelerate HHS transformation,” Federation of American Scientists, Dec. 23, 2024.
  44. Bethany Abbate, “Chief AI officers must be preserved in the Trump administration,” FedScoop, Feb. 18, 2025.
  45. Hugh Gamble, “Unlocking efficiency in the government through AI agent solutions,” Politico, March 4, 2025.
  46. Deloitte analysis of FedScope and the Department of Labor’s O*NET database to identify the potential of gen AI automation in federal agencies; Tasha Austin, Joe Mariani, Thirumalai Kannan, and Pankaj Kishnani, “Generative AI for government work tasks,” Deloitte Insights, April 25, 2024.
  47. Deloitte, “From code to cure, how generative AI can reshape the health frontier: Unlocking new levels of efficiency,” 2023.
  48. Renee Yao, “Nvidia works with Deloitte to deploy digital AI agents for healthcare,” Nvidia, Oct. 21, 2024.
  49. Advanced Research Projects Agency for Health, “RFI: Agentic artificial intelligence systems,” Oct. 18, 2024.
  50. Ibid.
  51. US Food and Drug Administration, “FDA announces completion of first AI-assisted scientific review pilot and aggressive agency-wide AI rollout timeline,” press release, May 8, 2025; National Institutes of Health, “NIH researchers develop AI tool with potential to more precisely match cancer drugs to patients,” press release, April 18, 2024; Department of Veterans Affairs, “AI use case inventory,” accessed May 15, 2025.

Acknowledgments

The authors would like to thank William D. Eggers, Prashanth Prasanna, John Zachary, Amrita Datar, and Michael Greene for the insights and support provided in publishing this article.

They would also like to express their thanks to Dilip Philipose, Rashmi Mathur, Kevin Footer, Natalie Young, Eric White, Ashton Christina Astbury, Sofia McKewen Moreno, Jeffrey Williams, Darren Schneider, and Erin Malone for providing their thoughtful feedback and suggestions at critical junctures.

Cover image by: Jim Slatton