Perspective: Agentic AI – lessons from the real world

The shift to Agentic AI

The era of experimentation with AI is over. The people and organisations that succeed are those that treat AI as a disciplined, enterprise-wide transformation. 

The conversation has changed from “What can AI generate?” to “What can AI execute, orchestrate, and automate?”

This shift raises the stakes and makes execution more complex. It is also why value capture, not experimentation, must be the anchor. 

The insights in this paper come from real implementation work: designing, deploying, and scaling AI agents in complex government and business environments, from high-volume frontline operations, to business-critical decision processes. 

These lessons reflect what works, what fails, and what organisations must get right to unlock value at scale.  

The promise of Agentic AI

The world has quietly but decisively moved past the GenAI “use case era.” Early adoption focused on chatbots, summarisation tools, and productivity hacks — useful, but limited. These were AI assistants: reactive, single-turn, and dependent on human prompting to act and provide context.

Today, organisations are confronting a very different frontier: agentic AI.

  • Where GenAI answered questions, agentic AI gets things done.
  • Where GenAI produced content, agentic AI takes actions.
  • Where GenAI worked in isolation, agentic AI operates within workflows, systems, and teams.

As a result, Agentic AI provides opportunities to address meaningful problems and drive sustained productivity improvements – an increasingly urgent goal for most organisations and economies.  Yet, many organisations find it difficult to move beyond pilots and proof-of-concepts to deliver Agentic AI benefits at scale. 

In our experience, success comes down to approaching Agentic AI in a disciplined and structured way - getting beyond the hype and promise of quick wins.  The following lessons summarise some of our key ‘warts and all’ learnings gained through implementing successful Agentic AI systems with users on the ground.

After working with Agentic AI in the real world, one hard truth stands out - nothing scales without trust. You can build the smartest agent in the world, but if people do not trust how it behaves, it will never move beyond a pilot. Hence, trust is not just an objective or a success factor; it is non-negotiable.

Transparency, Accuracy and Data Quality

Trust begins with de-mystifying the AI agent. 

Teams want to know exactly what an agent does, when it acts on its own, when it needs human review, and when the decision stays fully human. Clear decision boundaries between humans and agents allow teams to stay in control. 
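To make the point concrete, these decision boundaries can be written down as an explicit, auditable routing policy. The sketch below is illustrative only - the action kinds, the confidence threshold, and the routing rules are assumptions for the example, not a reference design:

```python
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    AUTONOMOUS = "agent acts alone"
    HUMAN_REVIEW = "agent drafts, human approves"
    HUMAN_ONLY = "decision stays fully human"

@dataclass
class Action:
    kind: str          # e.g. "summarise", "approve_payment" (illustrative)
    confidence: float  # agent's self-reported confidence, 0..1

# Illustrative policy table: action kinds that are never automated.
HUMAN_ONLY_KINDS = {"approve_payment", "policy_exception"}
REVIEW_THRESHOLD = 0.85  # below this, route the output to a human reviewer

def route(action: Action) -> Route:
    """Apply an explicit, auditable boundary between agent and human."""
    if action.kind in HUMAN_ONLY_KINDS:
        return Route.HUMAN_ONLY
    if action.confidence < REVIEW_THRESHOLD:
        return Route.HUMAN_REVIEW
    return Route.AUTONOMOUS
```

The value of writing the boundary down this way is that it can be reviewed, audited, and adjusted by the business, rather than living implicitly inside the agent.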

The concept of control is central to trust and is shaped by the way agents are designed. Users want clarity on how an agent arrived at its output and whether it did so fairly. That control rests on visible reasoning: users should be able to easily access and interrogate reasoning logs and the specific references behind an output. That is why user experience and interface design are much more than a cosmetic detail for AI applications; they are critical for ensuring humans feel engaged, responsible, and in control.

Trust deepens when people consistently see accurate results. 

Inconsistency fuels concerns about accuracy and hallucinations. These concerns are valid and need to be addressed deliberately right from the start of the design, not as an afterthought. Hallucinations take different forms. Sometimes the agent produces information that is simply incorrect. At other times, it relies on outdated knowledge or interprets information out of context. This is why grounding an agent in organisation-specific context is so important. 

Perfect data is unrealistic; prioritisation matters

With agentic AI, outputs are only as good as the inputs. Reliable enterprise knowledge sources matter, which means data preparation, organisation, security, and quality need to be critical elements of any agentic AI implementation.

However, it is unrealistic to expect organisations to start with perfect data. Most enterprises are built up over time, with information scattered across systems and of uneven quality. The practical approach is to segment and prioritise what matters most. Tag high-quality information, guide the agent to draw from trusted sources, and structure retrieval so the right content is surfaced at the right time. Techniques such as narrowing data scope and breaking information into clear, manageable sections improve retrieval clarity and accuracy. Data quality improves iteratively, and so does AI performance. This is how accuracy is built, step by step, rather than tackled all upfront.
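One way to picture this prioritisation is a retrieval step that ranks documents tagged as trusted ahead of everything else. A minimal sketch, assuming a simple keyword-overlap score and a single `trusted` quality tag (both illustrative, not how any particular retrieval stack works):

```python
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    source: str
    trusted: bool  # quality tag applied during data preparation

def retrieve(query: str, docs: list[Doc], k: int = 3) -> list[Doc]:
    """Rank by keyword overlap, with trusted sources outranking untrusted ones."""
    def score(d: Doc) -> tuple[bool, int]:
        overlap = sum(1 for w in query.lower().split() if w in d.text.lower())
        # Trust tier first, relevance second: a tagged source always wins.
        return (d.trusted, overlap)
    candidates = [d for d in docs if score(d)[1] > 0]  # drop irrelevant docs
    return sorted(candidates, key=score, reverse=True)[:k]
```

Real implementations would use embedding-based retrieval rather than keyword overlap, but the principle is the same: the quality tags created during data preparation directly shape what the agent sees.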

Testing, Observing, Auditing

Operationally, sustained accuracy comes from disciplined testing and evaluation of agents across messy, real-world scenarios, not just ‘happy’ paths. This means being explicit about what good looks like, defining clear expectations upfront and anchoring them in a representative set of historical cases that reflect the standard of output the team or organisation expects. This also includes running agents against edge cases, assessing how they behave as context changes, and checking whether outputs remain consistent over time.

Automated evaluations should be used with caution, as agents can make subtle errors that scores alone may miss. Human feedback loops are therefore essential. Users and supervisors need simple ways to flag issues, correct outputs, and reinforce what “good” looks like.

These signals, combined with observability and evaluation checkpoints built into the workflow, allow teams to catch drift early and continuously improve reliability. Audit logs then close the loop, providing traceability across the end-to-end process so teams can understand what happened, why it happened, and how to improve outcomes. This combination of testing, human feedback, and visibility is what turns early accuracy into sustained accuracy at scale.
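An evaluation checkpoint of this kind can be sketched as a harness that replays historical cases and flags drift when accuracy falls below an agreed baseline. All names and the baseline value here are illustrative assumptions:

```python
from typing import Callable

def evaluate(agent: Callable[[str], str],
             golden_set: list[tuple[str, str]],
             baseline: float = 0.9) -> dict:
    """Run an agent over historical cases and flag drift below a baseline."""
    results = [(case, expected, agent(case)) for case, expected in golden_set]
    correct = sum(1 for _, expected, actual in results if actual == expected)
    accuracy = correct / len(results)
    return {
        "accuracy": accuracy,
        "drift_detected": accuracy < baseline,  # trigger for human review
        # Failed cases are surfaced so supervisors can inspect and correct them.
        "failures": [r for r in results if r[2] != r[1]],
    }
```

In practice the comparison would rarely be exact string equality - rubric-based or human scoring is more common - but even this simple shape makes drift visible run over run.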

Across deployments, we see one pattern consistently repeat itself: agentic AI amplifies the process it enters. It does not fix a broken one. The real work begins long before the agent is built. It begins with understanding where value is created and how the work should flow in the future.

And if the process is unclear - for example, when rules sit in background documents or in the heads of experienced staff - do not shy away from tackling it. Take the opportunity to fix the process and focus on the basics of process engineering. A good business analyst can map workflows, uncover the real rules, remove unnecessary steps, and shape the business case for value.

Where does agentic AI best deliver value? 

In choosing where to start, certain patterns appear again and again. Agentic AI delivers the most value in workflows that are regular and repeatable with clear steps and established rules. These are often high-volume or high-friction tasks that absorb significant time. The outcome is predictable, but the path to get there is slow and manual. 

When improving turnaround time, accuracy, or consistency will create tangible value, these workflows become strong starting points. Importantly, the right agentic AI use cases will also involve human review and oversight at specific points in the process, which makes them well suited for a human-in-the-loop design. 

The strongest candidates for deriving meaningful Agentic AI value include: 

  1. Multi-step administrative tasks such as case intake, triage, form preparation, submissions, and compliance checks. 
  2. Information retrieval and synthesis where staff spend time locating, verifying, and summarising information for assessments or reports. 
  3. Coordinated workflow actions that span multiple systems or hand-offs, including scheduling, routing, approvals, escalations, and closing the loop on follow-ups. 
  4. Decision support and validation where rules and thresholds exist but still require interpretation, such as eligibility checks, risk assessments, and policy application. 

Agentic AI delivers the greatest impact in high-volume, rule-driven workflows with opportunities for human-in-the-loop review. When workflows and user experience are well-designed, agents become tangible accelerators to capture value.

AI agents can automate many tasks, but they do not replace the human role.  Instead, they elevate the role of human expertise and judgment. Understanding how humans and agents interact is an important component of workflow design.  

There is a need to understand the broader team and workforce impact of introducing AI agents.  For example, one clear pattern that has emerged is how GenAI has accelerated the ability of junior staff to contribute meaningful outputs much earlier in their careers. Work that once took two or three formative years to master, can now be produced in a fraction of the time.  

While this acceleration brings real benefits, it also introduces a new concern. As the time to produce outputs shortens, the opportunity to deeply internalise the first principles behind the work also shrinks. Some core foundational skills are now implicitly delegated to the agent. This places greater responsibility on leaders to ensure that speed does not undermine professional judgment, craft, or long-term capability development.

Complacency is a genuine risk, particularly for individuals who are time-pressed or have high trust in technology solutions. When an agent performs consistently well, it is easy for teams to assume it will always get things right. But AI agents can make mistakes and can vary outputs over time, so humans must continue to check, review, and intervene when needed. As with junior staff members, active oversight is an important part of the job and needs to sit at the core of agentic workflow design.

Successful agentic AI initiatives share two fundamentals. They start with a clear choice of the right problem to solve, and they invest the time upfront to design the processes, workflows and human roles properly. There is often pressure to move quickly and jump straight into building, but successful deployments show that getting these foundations right is what gives agentic AI the best chance of delivering sustained value.

After a year of deployments, one reflection is clear - AI should no longer be considered experimental. We see many organisations continuing to run experiments without building enterprise-level capability. The time spent experimenting has delivered insights but not sustained impact. The reality is that Agentic AI is already proving itself as a dependable utility when designed and governed properly. Organisations need to approach Agentic AI in terms of building sustained and enduring capability.

Scaling AI demands a new operating discipline 

A lesson from real world deployments is that scaling AI is not simply a technical upgrade. It is a shift in how the business operates. Organisations that want to move beyond pilots must build with scale in mind from the beginning. 

Build with a clear end goal. Scaling only works when everyone understands what the business is trying to achieve. The goal should not be to automate everything. It should be to focus on the common and repeatable use cases that create real value.  

Treat organisational context as a business asset. Good AI output comes from grounding the system in the organisation’s own knowledge. Policies, documents, data, and day-to-day operations all shape output quality. 

Design for the enterprise, not one team. AI only scales when the foundation can support many workflows across the organisation. A shared enterprise architecture and platform allows new use cases to be added quickly without rebuilding from scratch. This is where true organisational momentum starts to build. 

Stay flexible with models. Models will continue to evolve. Waiting for the perfect model is a long game that will result in missed opportunities. Rather, successful organisations find a fit-for-purpose model and build a model-agnostic platform for ease of adaptation and to avoid future constraints.
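A model-agnostic platform can be as simple as writing workflows against a common interface and keeping each provider behind a thin adapter. The sketch below uses stub adapters in place of real provider SDKs - all class and function names are hypothetical:

```python
from abc import ABC, abstractmethod

class ModelClient(ABC):
    """Common interface so workflows never depend on one vendor's API."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class StubModelA(ModelClient):
    """Hypothetical adapter; a real one would wrap a provider SDK."""
    def complete(self, prompt: str) -> str:
        return f"A:{prompt}"

class StubModelB(ModelClient):
    """A second provider can be swapped in without touching workflows."""
    def complete(self, prompt: str) -> str:
        return f"B:{prompt}"

def run_workflow(client: ModelClient, task: str) -> str:
    # Workflow code is written against the interface, not a specific model.
    return client.complete(task)
```

Because every workflow takes a `ModelClient`, moving to a newer or cheaper model is a one-line configuration change rather than a rebuild.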

Scaling AI requires clarity, discipline, and an enterprise mindset

The key lesson is that organisations that are successfully implementing agentic AI are playing the long game and understanding that this technology, while still relatively new, will inevitably be an integral part of business operations in the future.  As a result, they are thinking about scale, sustainability and enterprise deployment from the beginning rather than allowing a proliferation of interesting, but ultimately wasted, pilots and proof of concepts. 

A clear lesson from organisations deploying Agentic AI programmes is to kick-start the momentum with the first 5% or 1-3 anchor use cases. The goal is to show what AI can do in a visible and practical way. These early use cases are not pilots or experiments; they are strategic anchors selected for their ability to demonstrate meaningful value quickly and build the momentum required for enterprise-level adoption. When early wins remove friction or improve quality, it unlocks belief across the organisation. Teams begin to spot new opportunities on their own and the conversation shifts from “Why AI?” to “Where next?”.  

Reimagining the Future of Work

Real change begins when people are given permission to reimagine how work could be done differently. One of the most effective and repeatable ways to create that shift is to invest time in co-designing the future of operations with the teams who do the work every day.

Bringing teams into a focused innovation workshop creates space to step away from current constraints and think more openly about what could be improved. The intent is not to solve everything at once, but to surface where work feels heavy, repetitive, or low-value. Using a simple start, stop, continue framework helps structure these conversations. Teams can identify activities that no longer need to be done, explore new ways of working, and pinpoint where AI could meaningfully support better outcomes.

The key to making these sessions work is leadership behaviour. In one instance, a senior leader made a conscious decision to step out of the co-design session so that the team could speak candidly and think more boldly without feeling like they were criticising management. What mattered most happened afterward. The leader rejoined the session to listen carefully, ask thoughtful questions, and openly support the ideas that emerged from the ground. That visible sponsorship sent a powerful signal of trust and gave the team confidence that their thinking would shape the future.

Involving teams in the redesign of work turns change into something people help create rather than something imposed on them. When teams feel heard and supported, AI stops being an abstract initiative and becomes a practical enabler tied directly to organisational goals. 

Show, not tell

Another technique that consistently works is simple: show, not tell. A live agent completing a task in seconds does more to change minds than any presentation or slide deck ever could. A good demo showcases the agent’s operational logic in action, revealing how thoughtful design and grounding create dependable outcomes. 

This is why many co-design and innovation workshops are far more effective when paired with prototyping. Prototypes don’t need to be perfect. Low-fidelity screens on key workflows that “move the needle” are often enough to bring ideas to life. The intent here is to enable people to see a future way of working, and in doing so, conversations shift from scepticism to possibility.

Learnings from on the ground show that tipping points are not technical, they are psychological. “Showing” creates alignment faster than “telling”. It shortens decision cycles, builds confidence, and helps leaders and teams rally around a shared vision of what is possible. More importantly, it helps shift AI from an abstract concept into something tangible that people can relate to and support. Once people see AI working in their world, the rest of the enterprise begins to move with it.  

Conclusion: From value to reality 

Agentic AI is reshaping how organisations think, operate, and serve. The challenge is sustained execution at scale.

Leaders must make deliberate choices about ownership, strategy, trust, operating model, and readiness. AI is moving quickly, and indecision is also a decision. You can wait and observe, or you can move with focus and purpose guided by real business cases, value clarity, and scalable foundations. 

We need to shift the mindset from managing AI risk to embracing AI opportunity, from AI pilots to enterprise agentic AI systems, and from technical solutions to those that have human engagement and trust at the core.

Success belongs to leaders who pair ambition with operational discipline and recognise that the real power of AI lies in the way work is redesigned.

Authors

Leong Si Ngah
Technology & Transformation Partner, Deloitte Southeast Asia

Chris Lewin
AI & Data Leader, Deloitte Asia Pacific

Adam Powick
Senior Advisor, Deloitte Southeast Asia
