A tech company launches an AI résumé screener meant to speed hiring, only to find it has been quietly learning past biases and rejecting qualified candidates. A retail service bot makes promises that the company doesn’t want to keep. Clinicians in a hospital lean on a condition-alert tool that speeds treatment but degrades their ability to spot nuances the model isn’t trained to detect. An industrial manufacturer puts AI on the board to surface risks; the directors learn it could be manipulated for personal agendas.
These stories are not science fiction. They can happen, and are happening, to organizations every day, and they raise important questions: What could have been done differently? How do organizations improve input and oversight when AI is involved in decisions? What is the best mix of AI and human input in each decision, leveraging enough machine autonomy to improve speed, consistency, and scale while maintaining sufficient human agency?
AI has the potential to transform human decision-making. However, organizations should first treat this process as a strategic discipline and then design human–machine decision-making relationships accordingly.
Do this well, and AI is more likely to sharpen human judgment, not crowd it out. Today’s cautionary tales can become tomorrow’s competitive advantage.
Leaders today face a torrent of choices in conditions that are noisier, faster, and riskier than ever. Dashboards multiply and data streams expand, but leaders rarely stop to question where that information comes from or whether they can trust it.1 In a 2023 Oracle study, 85% of business leaders said they had regretted or questioned decisions they made in the past. Additionally, 72% of those leaders said that the volume of data, and their lack of trust in it, had stopped them from making any decision at all.2
Many are turning to AI as a solution. In our 2026 Global Human Capital Trends survey, 60% of executives say they now regularly use AI to support their decisions. Gartner projects that by 2027, half of business decisions will be augmented or automated by AI agents.3 Even boards are beginning to use AI to inform decisions.4
But AI use in decisions may be racing ahead of organizational oversight. As a fundamentally new technology, AI brings distinct challenges.
Our survey data suggests that the issue of AI and decision-making is still emerging despite the risks: Nearly two-thirds (64%) of respondents consider it very important to their current success, and a similar number are taking steps to address it. However, only 5% consider themselves to be leading the way (figure 1).
As organizations expand AI-enabled decision-making, many find that AI amplifies existing deficiencies rather than resolving them. Deloitte’s High Impact Decision Intelligence research has found that high-quality decision-making is a discipline that can be learned, improved, and scaled. Yet more than half of organizations in that study (57%) operate at low decision-making maturity, with few teaching decision skills or providing the tools needed to support decision-making.13 High-maturity organizations are far more likely to do both and to make decision strategies explicit.14
AI is reshaping organizational decisions, whether organizations are ready or not. To strengthen decision quality and mitigate risks, organizations should first hone decision-making as a discrete capability and then intentionally design how humans and AI interact as deciders.
AI-enabled or not, organizations that practice decision-making as a rigorous discipline consistently outperform peers.15 Simply thinking about decisions as a capability is a good start.
Organizations can use decision frameworks to classify choices and pre-assign owners, data, guardrails, and speed for each category (figure 2). Amazon’s one-way vs. two-way door model provides a simple example: Decisions that are difficult or impossible to reverse—one-way doors—require methodical decision-making, while easily reversible decisions—two-way doors—can be made quickly.16 This framework helps teams make reversible bets quickly, while giving irreversible moves higher scrutiny.
Figure 2
Most organizations still treat decisions as by-products of meetings and dashboards rather than as choices worthy of explicit design and focus. To add rigor and integrity, consider the following:
Deloitte research in organization design found that a surprising number of organizations lack clarity about decision rights.18 AI will likely increase the pressure here and muddy decision rights further. To achieve clarity, organizations can take the following actions:
Atlassian, for example, recognized that unclear boundaries between AI-led and human-led decisions were creating bottlenecks. Instead of creating a rigid rulebook, the company treats decision rights as something that evolves, regularly revisiting where AI should handle routine tasks and where humans should step in for higher-risk calls. Teams gain a clear understanding of what’s automated and what requires human judgment. This transparent approach builds trust.20
Even the best decision processes can fail if the decision-makers—people and AI—aren’t prepared, supported, and evaluated. Organizations should design for the decision-making competence they need: building human decision-making skill and measuring AI performance with rigor befitting their mission-critical contributions.
There’s an irony: Many organizations teach AI how to decide while assuming humans already know how. These practices can help:
AI’s role in decision-making requires explicit evaluation, including quality criteria, regular retraining, and fit-for-risk oversight. This is not a new version of employee performance management. AI evaluation is a growing discipline requiring its own expertise and scale, which can be built through the following practices.
Human agency is the degree to which individuals feel influence over, and responsibility for, the events around them. People more readily accept responsibility when they feel real influence.33
Leaders should be intentional about supporting human agency and building trust as humans and machines interact to make decisions, including these practices:
Trust is an essential element of human collaboration. It is also essential for humans working with technology such as AI that interacts and evolves. Deloitte’s Trustworthy AI research shows that workers who trust the AI agents they work with are 10 times more likely to see those agents as critical to creating value.36
People extend trust to technology when it consistently demonstrates reliability, capability, transparency, and humanity.37 The recommendations in this chapter address these four factors and are likely to lead to greater trust.
Humans can confidently collaborate with AI and accept responsibility for the results when they know how the decision was made and how they materially influenced it. Trust rises when AI is used where people welcome it and is constrained where they don’t. Many users want AI to play some role in analytical, high-stakes domains (for example, fraud detection, weather forecasting, and drug discovery) but little or no role in more personal or value-laden decisions.38
Organizations that elevate decision-making as a discipline, improve decision-making skills, evaluate AI’s involvement in decisions, and design for human agency in decision-making can gain speed and quality without sacrificing trust. Those that don’t risk opaque choices, diluted accountability, and the slow erosion of human agency at precisely the moment when clarity matters most.
Evidence suggests the upside is meaningful: Technology can accelerate analysis and clarify uncertainty, but it cannot replace the human purpose, values, and judgment behind the choices themselves. This is the path to AI as a trusted adviser—improving the speed, scale, and quality of decisions while keeping humans firmly in charge of the “why.”
For Deloitte’s 2026 Global Human Capital Trends report, Deloitte worked in collaboration with Oxford Economics to survey more than 9,000 business and human resources leaders across many industries and sectors in 89 countries. In addition to the broad, global survey that provides the foundational data for the report, Deloitte supplemented its research with worker-, manager-, and executive-specific surveys to uncover where there may be gaps between leader and manager perceptions and worker realities. The survey data is complemented by more than 50 interviews with executives and subject matter experts from some of today's leading organizations. These insights helped shape the trends in this report.