
Rethinking alternative investment operations

Agentic AI and the path to smart, autonomous workflows

Authors:

  • Thibault Chollet | Partner, Alternatives
  • Piotr Zatorski | Senior Manager, Alternatives
  • Gerard O’Mahony | Analyst, Alternatives

This podcast episode is based on the Deloitte Luxembourg article below and includes content generated, assisted, or edited using artificial intelligence technology. It has been reviewed by a human prior to publication. The voices featured are synthetic. This podcast is provided for general information purposes only and does not constitute any kind of professional advice rendered by Deloitte Luxembourg. Deloitte Luxembourg accepts no liability for any loss or damage whatsoever sustained by any person who uses or relies on the content of this podcast. 

Alternative investment managers face a defining choice: absorb growing operational complexity through headcount or fundamentally redesign how work gets done. Unlike other financial sectors, alternatives aren't constrained by legacy systems. This creates a rare opportunity to build operating models around agentic artificial intelligence (AI) from the ground up, rather than retrofitting automation onto broken processes.

This article shows how:

  • Agentic AI differs from core system implementation or simple task automation by coordinating work across fragmented systems and pursuing objectives;
  • Architecture, governance, and trust frameworks are crucial to harnessing AI's potential for controlled, scalable operations; and
  • Chief operating officers (COOs) and chief information officers (CIOs) can chart a pragmatic path from contained pilots to enterprise-wide transformation.

Introduction

Private markets are scaling differently from traditional asset classes. The assets under management (AUM) of global alternatives have grown steadily over the past decade and are expected to reach US$32 trillion by 2030.1

The visible strain on operations stems directly from how that growth manifests. Longer holding periods extend oversight requirements, while complex fund structures often defy standardization. Additionally, jurisdiction-specific regulatory overlays multiply compliance workloads. This pressure is further intensified by rising investor expectations for transparency, risk oversight, and environmental, social and governance (ESG) reporting.

Many firms have absorbed growth through headcount expansion, resulting in fragmented information, knowledge trapped in silos, and deteriorating visibility. For mid-size managers, every new fund adds operational overhead precisely when speed to market matters most. Complexity in alternatives isn’t cyclical; it’s structural.

Specific operational pain points include:
  • Scalability challenges from manual workflows and document‑heavy processes;
  • Siloed teams and increasing coordination overhead as strategies and jurisdictions expand;
  • Fragmented data across portfolio companies, administrators, and internal systems;
  • Slower speed to market due to operational teams struggling to match deal flow;
  • High exception handling burden triggered by bespoke structures and regulatory variability; and
  • Talent shortages in specialized roles such as risk and compliance.

Unlike traditional financial services like banking and insurance, alternatives aren’t bound by decades of legacy systems. This freedom enables a different architectural trajectory, where technology reshapes work rather than merely automating fragments of legacy processes.

This is where agentic AI changes the equation, offering a way to manage complexity as a system. By allowing agents to operate across data sources, applications, and workflows, firms can scale without proportionally increasing the operational burden. The leap forward isn’t technological sophistication, but the ability to operate differently at scale.

From automation to agency: A structural shift in how work gets done

Most AI discussions in asset management focus on efficiency gains: faster document review, improved data extraction, and quicker analysis. While these evolutionary improvements matter, agentic AI represents a foundational shift.

Consider the impact of electricity on factory design. Early adopters simply swapped steam engines for electric motors without changing the factory layout, much like firms today use AI to automate isolated manual tasks without redesigning the processes around them. True transformation only occurred when factories were rebuilt from the ground up to exploit distributed, continuous power.

AI has reached a similar inflection point. Instead of retrofitting technology onto existing workflows, we must ask: How would we build our operating model differently if agentic AI were the foundation from the start?

Agentic systems perceive information, reason about objectives, and take actions across tools. Because they adapt based on outcomes, these systems pursue goals rather than static tasks. They coordinate with other systems and agents, escalating only when human judgment is required.

The technology can address specific operational gaps across sectors:

  • In real estate, key stakeholders like property managers and valuers often operate on incompatible systems;
  • In private equity, portfolio companies frequently follow divergent reporting and accounting standards;
  • In infrastructure, significant asset diversity prevents standardization; and
  • In private debt, borrower data arrives in fragmented formats across syndicate partners, which makes covenant monitoring and credit risk assessment coordination intensive.

By operating across these seams, agentic systems stitch fragmented processes into continuous flows.

Importantly, autonomy doesn’t mean an absence of control. Agents operate within human-defined boundaries such as escalation rules, approval thresholds, and audit trails. They don’t replace humans; instead, they augment professionals by absorbing complexity so experts can focus on judgment and oversight.
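To make these boundaries concrete, the sketch below shows how escalation rules, approval thresholds, and audit trails might be expressed in code. All names (`ApprovalPolicy`, `AgentAction`) are illustrative and assume no particular framework:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentAction:
    description: str
    amount_eur: float  # monetary exposure of the proposed action

@dataclass
class ApprovalPolicy:
    approval_threshold_eur: float            # above this, a human must sign off
    audit_trail: list = field(default_factory=list)

    def submit(self, action: AgentAction) -> str:
        decision = ("escalate_to_human"
                    if action.amount_eur > self.approval_threshold_eur
                    else "auto_execute")
        # Every decision is logged, whether or not it was escalated.
        self.audit_trail.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action.description,
            "amount_eur": action.amount_eur,
            "decision": decision,
        })
        return decision

policy = ApprovalPolicy(approval_threshold_eur=50_000)
print(policy.submit(AgentAction("Approve routine invoice", 12_000)))   # auto_execute
print(policy.submit(AgentAction("Capital call wire", 250_000)))        # escalate_to_human
```

The point of the design is that the agent never decides its own limits: thresholds and escalation paths are set by humans, and the audit trail records every decision either way.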

Transforming operations: How agentic AI addresses alternatives' pain points

While 95% of fund managers use generative AI,2 this broad adoption only tells part of the story. Most firms are using the technology to automate isolated tasks rather than to fundamentally rethink their core operations. Currently, generative AI primarily augments individual day-to-day tasks in operations, risk, compliance, and reporting.3

The potential for agentic AI to transform workflows across functions is much higher, yet deployment remains low.4 The gap stems from operating model readiness and risk management discipline rather than technological maturity.

Take investor reporting as an example. Traditionally, a fund accountant pulls data, a compliance officer reviews regulatory requirements, an operations manager coordinates with administrators, and investor relations finalizes materials. The process relies on sequential handoffs, email updates, and ad hoc exception handling. When complexity increases due to more funds or regulations, firms typically add headcount at each stage to maintain throughput.

In an agentic operating model, an agent assumes a defined role rather than executing a step in a fixed sequence. These systems interact dynamically across stakeholders and platforms to augment human capabilities. By orchestrating data gathering and applying jurisdiction-specific regulatory rules, the agent can assemble draft reports in investor-specific formats autonomously.

Humans retain ownership of outcomes and focus on high-value judgment while the agent handles low-value tasks. For a head of investor relations, the shift is from production coordinator to strategic adviser, anticipating investor needs rather than chasing inputs.


Strategy, architecture and trust: The path to scale

Agentic AI doesn’t compensate for weak foundations. Before autonomy is feasible, firms must put a disciplined operating environment in place. Key components include:

  • A curated data layer that exposes enterprise data and documents securely and at scale;
  • Application programming interface (API)‑based system integration that removes manual entry bottlenecks by exposing functionality programmatically;
  • A model layer that houses large language models (LLMs) and prompt libraries with versioning, monitoring, and rollback controls;
  • A tool layer that exposes core applications for agent use; and
  • An orchestration layer that manages agent coordination, routing, escalation, and exceptions.
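The layered stack above can be sketched as a minimal pipeline. Every class, field, and value here is hypothetical and stands in for enterprise systems, not a real platform API:

```python
class DataLayer:
    """Curated data layer: secure, uniform access to documents and records."""
    def fetch(self, fund_id: str) -> dict:
        return {"fund_id": fund_id, "nav": 102.5, "currency": "EUR"}

class ToolLayer:
    """Tool layer: exposes core applications (here, a reporting tool) for agent use."""
    def generate_report(self, record: dict) -> str:
        return f"Report for {record['fund_id']}: NAV {record['nav']} {record['currency']}"

class Orchestrator:
    """Orchestration layer: routes work between layers and escalates exceptions."""
    def __init__(self, data: DataLayer, tools: ToolLayer):
        self.data, self.tools = data, tools
        self.exceptions: list[str] = []

    def run(self, fund_id: str):
        record = self.data.fetch(fund_id)
        if record["nav"] is None:              # validation checkpoint
            self.exceptions.append(fund_id)    # escalate instead of guessing
            return None
        return self.tools.generate_report(record)

orchestrator = Orchestrator(DataLayer(), ToolLayer())
print(orchestrator.run("FUND-001"))
```

The separation matters: because the agent reaches data and applications only through defined layers, each layer can enforce its own security, versioning, and monitoring controls.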

Consider a quarterly reporting cycle where data arrives from 30 portfolio companies in disparate formats. Without coherent architecture, silos constrain the capabilities of AI. However, once proper foundations are in place, agents can orchestrate and execute the entire flow autonomously. These tools detect anomalies, cross-reference patterns, enrich data from internal systems, and escalate only true exceptions to a human.

Executives often perceive AI as an unobservable “black box”, citing privacy, security, and lack of control as the primary hurdles to adoption.5 These concerns frequently arise from unfamiliarity rather than inherent technical limitations, as robust governance and control are achievable through deliberate design.

For example, an infrastructure manager can make covenant interpretation transparent by embedding governance into the system architecture from the start.

Trust in AI won’t happen overnight. Firms must build confidence incrementally, starting with contained deployments. As governance frameworks mature and reliability is demonstrated, they can then expand autonomy.

Deloitte's Trustworthy AI framework translates this governance intent into concrete architectural requirements. It covers foundational elements of governance, controls, and regulatory compliance, alongside core dimensions including robustness, privacy, transparency, security, and accountability. These principles become operational through specific design choices:

  • Explainability requires logged reasoning paths;
  • Privacy requires data minimization by default; and
  • Robustness requires validation checkpoints before actions propagate.
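The three design choices above can be illustrated in a single sketch: a covenant check that logs its reasoning path, minimizes its inputs by default, and treats missing data as a validation failure to escalate. All names and the covenant limit are hypothetical:

```python
ALLOWED_FIELDS = {"fund_id", "covenant_ratio"}   # data minimization by default

def minimize(record: dict) -> dict:
    """Drop every field the check does not strictly need."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def check_covenant(record: dict, limit: float = 4.0) -> dict:
    """Covenant check with a logged reasoning path and a validation checkpoint."""
    data = minimize(record)
    reasoning = [f"inputs: {sorted(data)}"]      # explainability: log what was used
    ratio = data.get("covenant_ratio")
    if ratio is None:                            # robustness: never let a gap propagate
        reasoning.append("missing ratio -> escalate")
        return {"status": "escalate", "reasoning": reasoning}
    reasoning.append(f"ratio {ratio} vs limit {limit}")
    status = "breach" if ratio > limit else "compliant"
    reasoning.append(f"conclusion: {status}")
    return {"status": status, "reasoning": reasoning}

result = check_covenant({"fund_id": "F1", "covenant_ratio": 4.6, "lp_email": "x@y.com"})
print(result["status"], result["reasoning"])
```

Note that the investor's email never enters the check, and the reasoning list reconstructs exactly how the conclusion was reached, which is what makes the output auditable.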

When governance is embedded architecturally rather than enforced procedurally, control becomes stronger and more scalable.


What this means for alternatives leaders

Leaders should treat this transformation as a structured, incremental journey. Success requires a clear roadmap to avoid fragmented experimentation that creates ungovernable AI sprawl.

  • Prioritize ruthlessly and iterate. Focus on high-value domains where coordination pain is highest.
  • Secure the right capabilities. Ensure you have sufficient internal capabilities or engage external specialists for functional expertise.
  • Manage change intentionally. Organizational readiness determines success as much as technical capability, so train people proactively as roles evolve.
  • Rethink rather than retrofit. Redesign work instead of applying AI to current processes.
  • Align AI performance with organizational structure. Success depends on governance, maturity and data discipline.
  • Evaluate pragmatic options in combination. Combine offshoring and agentic AI based on the complexity of the task.
  • Use a modular, reusable design. Build agents as components so they can be redeployed with minimal rework, improving return on investment (ROI) as the system scales.
