
61% of health nonprofits use gen AI, and trustworthy scaling is next

By Jay Bhatt, D.O., managing director, and Mani Keita Fakeye, Ph.D., research manager, the Deloitte Center for Health Solutions, Deloitte Services LP, and Gizelle Gopez, MPH, manager in program evaluation, Deloitte Consulting LLP

More than 60% of US nonprofits and community-based organizations that are focused on health are actively experimenting with generative artificial intelligence (gen AI), according to a recent survey of executives conducted by the Deloitte Center for Health Solutions. While nearly three in four executives from nonprofit organizations report that gen AI has already delivered a “moderate” to “great” impact on reducing inefficiencies, only 1% have scaled the technology across their organizations. By contrast, nearly a third of surveyed corporate health care executives said their organizations are operating AI at scale (see the 2026 US Health Care Outlook). This gap between experimentation and full integration underscores both a challenge and a significant opportunity.

While many nonprofits are testing AI use cases, they may lack the frameworks, funding, and confidence needed to scale responsibly. A thoughtful approach grounded in mission, trust, and organizational readiness can help nonprofits translate early success into sustainable impact. The Deloitte GenAI Maturity Model for Health Nonprofits can help leaders strengthen their governance, prioritize smart, high-value AI investments, and build workforce capabilities, driving greater reach and impact for the communities they serve.

Five stages of gen AI maturity for health nonprofits
Deloitte has identified five distinct stages of gen AI maturity, beginning with simple awareness and progressing to continuous innovation. Understanding these stages helps nonprofits—and all organizations—benchmark their current capabilities, identify gaps, and plan targeted actions for advancing their AI journey.

1. No adoption: Awareness without action
Organizations at this stage recognize the potential of gen AI but have not yet begun to explore it. Our survey found that 23% of nonprofits plan to explore AI within the next 12 months, while 3% have no plans to engage at all. The data suggests that AI adoption among nonprofits is lagging behind that of corporate health care organizations, as noted above. Limited resources and competing priorities may be delaying progress along the maturity model.

Key questions for nonprofits:

  • Who will champion our first steps and drive follow-through to scale?
  • What is our baseline readiness, including likely use cases and obstacles?

Next steps:

  • Establish executive sponsorship and identify internal AI champions.
  • Conduct a gen AI readiness assessment to understand technical and cultural foundations.

2. Experimentation: Testing in silos
While most survey respondents said they are experimenting with gen AI, only 30% have conducted structured pilot projects or proofs of concept. For example, a pilot using gen AI to triage service requests could uncover gaps in data quality and workflow integration—insights that can refine both the technology and internal organizational processes before wider adoption. Without these structured pilots, many nonprofits risk stalling at the experimentation phase—missing critical lessons, stakeholder buy-in, and the confidence to scale AI responsibly.

Key questions for nonprofits:

  • How are lessons and insights shared beyond individual teams or silos?
  • How are pilots being designed with ethics and trust in mind?

Next steps:

  • Define clear and measurable goals for each pilot.
  • Engage stakeholders early to identify high-impact opportunities and ensure alignment with the organization’s mission.

3. Foundational readiness: Strategy and structure
At this level, nonprofits develop a clear mission-aligned AI strategy that directly supports organizational priorities. For instance, a group focused on reducing client wait times might target administrative processes where AI can accelerate service delivery. Governance policies and protocols are documented. Resources have also been invested in platforms, systems, and workforce skills to support responsible AI adoption.

Key questions for nonprofits:

  • Who is accountable for overseeing AI risk, ethics, and responsible use?
  • How does our governance build trust with staff, volunteers, and the communities we serve?

Next steps:

  • Document governance protocols that reinforce trust.
  • Invest in staff education and skill-building to increase confidence, competency, and comfort with AI tools.

4. Integrated application: Scaling across the enterprise
Organizations here are shifting from strategy to execution. Gen AI becomes a shared tool across teams—from operations and communications to client engagement.

Key questions for nonprofits:

  • How are we tracking and sharing productivity gains?
  • How are we ensuring fairness and transparency?

Next steps:

  • Monitor usage and outcomes to refine governance policies.
  • Personalize training by role and function to deepen adoption.

5. Advanced maturity: Continuous innovation
At the highest level of maturity, gen AI is fully embedded into organizational operations, culture, and decision-making. Organizations at this stage proactively adapt to evolving risks, regulations, and technologies. For example, organizations may begin exploring agentic AI to anticipate emerging needs and to stay ahead of the innovation curve.

Key questions for nonprofits:

  • How are we demonstrating leadership in trustworthy AI practices?
  • How is staff engaged in advancing safety, learning, and opportunity?

Next steps:

  • Dedicate resources to stay informed on emerging global AI standards and leading practices.
  • Regularly update governance and retrain staff as new tools and use cases emerge.

Building trustworthy AI from strategy to service
Trust is a cornerstone of nonprofit work—and AI is no exception. Deloitte’s Trustworthy AI Framework[i] outlines six principles to guide responsible adoption:

  1. Safe and secure: Protect people and systems from harm.
  2. Private: Respect confidentiality and sensitive data.
  3. Transparent and explainable: Explain decisions and outputs clearly.
  4. Fair and impartial: Avoid amplifying bias.
  5. Responsible and accountable: Operate in a socially responsible way with clear ownership for AI outcomes.
  6. Robust and reliable: Ensure systems perform consistently and ethically under pressure.

In our conversations with nonprofit leaders, we found that trust manifests differently across organizations. Village Capital, an organization that helps start-ups raise investment capital and supports early-stage social entrepreneurs,[ii] has made responsible AI an ongoing accountability practice while navigating foundational readiness. It integrates trust principles throughout its AI strategy, helping to ensure that ethics are not an afterthought. Lemontree,[iii] an organization that addresses food insecurity, applies gen AI to securely summarize trusted client histories, enabling its team to deliver personalized support when helping clients navigate the complex system of food assistance. This demonstrates how the trustworthy and capable use of AI can deepen impact and client relationships. Fast Forward, an organization focused on accelerating tech nonprofits, is reimagining philanthropy to lead in the age of AI. By resourcing early-stage development, prioritizing responsible use, and backing collaboration, funders can equip nonprofits to adopt AI responsibly.[iv]

Empowering people
Gen AI has the capability to amplify human potential. When used effectively, AI acts as a force multiplier, automating repetitive, time-consuming tasks so staff can devote more energy to other aspects of the organization: relationships, strategy, and mission delivery. Think of how digital tools have reduced paperwork for case managers or optimized volunteer matching. AI can extend that transformation across nonprofit operations. AI should make work more human, more meaningful, and more rewarding, and help organizations meet their missions more effectively.

Bridging the gap: turning ambition into action
Moving from pilots to scalable impact involves deliberate alignment across strategy, capacity, and culture. To help accelerate maturity responsibly, nonprofit leaders can:

  • Engage funders and boards early and communicate AI readiness and ethical commitments.
  • Integrate AI into performance metrics to demonstrate tangible impact.
  • Reinforce transparency and community trust through responsible data practices.
  • Invest in leadership and governance training to strengthen oversight.
  • Evolve governance frameworks to keep pace with innovation and regulation.

A next frontier in AI isn’t just technical—it’s governance innovation. The nonprofits that thrive won’t necessarily be those that use the most AI, but those that use it with the clearest purpose and strongest alignment to mission and values.

The bottom line
Generative AI can be a powerful bridge for nonprofits—expanding their impact, strengthening their reach, and unlocking new possibilities. But its full potential is realized only when adoption is grounded in trust, a clear maturity model, and purposeful stewardship. Organizations that cross this bridge thoughtfully will be the ones who redefine what’s possible for their missions and the communities they serve.

Acknowledgements: Thank you to Sterling Clemmons, Sadie Bazur-Leidy, Rhonda Evans, Tatiana Blue Dixon, Aaron Landrum, Brett Tolman, Pavan Bhoslay Kumar, Nivedha Subburaman, David Levin, Prad Prasoon, Anna Alcaro, and Caroline Zuchold for their contributions to this work.


Endnotes

[i] ‘Trustworthy AI’ is a framework to help manage unique risk, Deloitte and MIT Technology Review, March 25, 2020
[ii] Village Capital, Village Capital, 2025
[iii] Lemontree, Lemontree, 2025
[iv] The Philanthropic Reset: How Philanthropy Can Lead in the Age of AI, Fast Forward & Google.org, 2025

This publication contains general information only, and Deloitte is not, by means of this publication, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This publication is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional advisor.

Deloitte shall not be responsible for any loss sustained by any person who relies on this publication.

