For decades, corporate org charts operated on a simple principle: Place humans at every decision juncture, and they will make the consequential choices. That simple arrangement is about to get more complicated.
Businesses no longer merely automate tasks. They are deploying work systems in which humans and agents operate side by side, deputizing digital agents to make decisions and act on them. Gartner predicts that by 2028, at least 15% of daily work decisions will be made by digital colleagues,1 raising questions about how prepared executives are to design systems in which multiple agents work together.
To get a sense of the potential implications, consider the concept of a dark factory. It’s a manufacturing facility where robotics and machinery operate so autonomously that lighting isn’t required. All the work occurs unseen by human eyes, until there’s an issue that a human needs to address.
As these multi-agent systems emerge, independently working on specific tasks and solving discrete problems, leaders may need to give more thought to issues such as workflows, governance, and decision rights. How will these agents work together? Who will be in charge? How many digital agents can a human manage and be accountable for? The dark factory is just one of many plausible scenarios that can help leaders envision what it could mean for multi-agent systems to perform knowledge work.
Early adopters are finding that bolting autonomous agents onto operating models designed for human workers is like fitting a jet engine to a bicycle. According to Deloitte’s Tech Trends 2026, many companies are automating existing processes designed for human workers without rethinking how work should be done.
Deloitte’s State of AI in the Enterprise 2026 survey finds that 84% of companies haven’t redesigned jobs to fit AI, even though automation expectations are high. The main obstacle, according to the executives surveyed, is a lack of worker skills, yet fewer than half of respondents report that their organizations have begun changing their talent strategies.
Consider some of the obstacles companies might face as they attempt to scale AI agents within traditional operating models. Those models were engineered for control, with stable processes, clear handoffs, and predictable outcomes. Agentic AI doesn’t work that way: These systems make autonomous decisions, learn as they go, and produce results organizations can’t always anticipate.
In addition, agents are neither capital nor labor. They act like workers but are funded like technology, creating governance gaps. Ownership can become muddled, especially with respect to decision rights, risk and liability, quality assurance, and performance accountability. The lack of clarity can be a barrier to scaling.
There’s also a risk of layering agents onto broken processes. Doing so doesn’t fix those processes; it amplifies their flaws.
Perhaps most importantly, as agentic systems take on more work, leaders will need to define new roles for humans.
In the multi-agent environment that’s taking shape, what will likely differentiate high-performing enterprises? And what should leaders redesign now?
Deloitte’s Center for Integrated Research is studying this issue, and early indicators suggest high-performing enterprises won’t treat agentic AI and multi-agent systems as a software rollout. Integrating this technology into the workforce introduces fundamental uncertainties around decision-making, accountability, and workforce dynamics that organizations will need to address proactively.
Despite uncertainty about how these models will take shape in the near term, leaders should step back and begin thinking through these fundamental issues now. Rather than waiting for a clearly predictable future, they can plan for multiple plausible scenarios of what multi-agent systems might look like and how organizations could evolve. Here are some of the issues they will likely need to work through:
As AI agents take more real-world actions, performance evaluation may need to expand beyond model accuracy. Leaders will need to measure how agents affect end-to-end outcomes, both across the enterprise and beyond it, for customers, partners, and competitors. That likely means logging and reviewing agent activity, auditing behaviors, documenting rationales and human interactions, and tracking disagreements. The goal would be to judge whether agents improve outcomes people care about, such as speed, throughput, and decision quality.
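To make that record-keeping concrete, here is a minimal sketch, in Python, of what an agent decision audit trail might capture. The class, field names, and helper function are illustrative assumptions for discussion, not a reference to any particular platform or product.

```python
# A minimal, illustrative sketch of an agent decision audit record,
# assuming a hypothetical review pipeline. Names are placeholders.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentDecisionRecord:
    agent_id: str                      # which digital colleague acted
    action: str                        # what the agent did
    rationale: str                     # the agent's documented reasoning
    human_reviewer: str | None = None  # who, if anyone, reviewed the action
    human_overrode: bool = False       # records a human-agent disagreement
    outcome_metrics: dict = field(default_factory=dict)  # e.g., cycle time, error rate
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def flag_disagreements(log: list[AgentDecisionRecord]) -> list[AgentDecisionRecord]:
    """Surface records where a human overrode the agent, for later audit."""
    return [record for record in log if record.human_overrode]
```

Even a simple structure like this makes the measurement questions tangible: which fields get populated automatically, who reviews the flagged disagreements, and which outcome metrics count as evidence that an agent improved speed, throughput, or decision quality.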
Leaders should prepare for roles emphasizing judgment, investigation, and intervention—especially in edge cases—while helping people maintain the operational context to spot when agents are wrong. This suggests a greater focus on oversight and accountability, process redesign, and effective collaboration with digital colleagues.
As agents take over routine work, leaders should consider making human differentiation a design requirement: rewriting roles and performance measures around judgment, collaboration, innovation under ambiguity, and trust, so that people aren’t left as the default catch-all for approvals and blame.
Agentic AI is beginning to reshape workforce operating models now, even before organizational design, controls, and talent models have shifted. The next several months matter. They’re a critical window in which leaders can think through the implications of multi-agent systems and establish guardrails before those systems begin to scale.