Authors:
Niamh Geraghty: Partner, Audit & Assurance, Deloitte Ireland
Shu Ning Li: Director, Audit & Assurance, Deloitte Ireland
Colin Melody: Director, Technology & Transformation, Deloitte Ireland
Shaun Gilbride: Director, Audit & Assurance, Deloitte Ireland
Performance Magazine Issue 49 - Article 7
“Disruption doesn't wait for permission.”1
That statement has never been more relevant. We speak with leaders who are wrestling with the same question: Is AI a true revolution or just another bubble waiting to burst?
It doesn’t matter.
What matters is separating the financial cycle from the operational reality. Today’s “hype” is fueling the construction of massive data centers, expanding energy infrastructure, and accelerating the development of AI capabilities at scale. Even if the stock market corrects itself over time – as it inevitably does – the physical and technological infrastructure will remain.
The bubble builds the road. The technology drives on it.
And the technology is already moving. Generative AI (GenAI) is beginning to reshape how organizations approach market research, operations, compliance and client services. These changes are no longer theoretical; they are unfolding in real time.
In this environment, a “wait and see” approach has quietly become the riskiest move a leader can make.
So, where do we start? How do we cut through the noise and focus on what actually matters?
At Deloitte Ireland, our philosophy is clear and practical: start small, think big and act fast. This isn't just a catchphrase. It’s a roadmap for turning uncertainty into momentum, and it reflects what we're seeing work in practice today.
Starting small means delivering a quick, tangible win, something that clearly demonstrates value and brings your people on board. The most effective place to start is with a clear assessment of your existing operations. In our experience, most processes fall into three simple buckets:
1. The automation bucket, where AI removes the last mile of friction.
These are processes where the heavy lifting of standardization has already been done. Typically, the workflow is structured but still depends on unstructured input, such as non-standard PDFs or ad hoc broker emails. This is precisely the friction point AI agents2 can eliminate. Because the underlying process is already solid, an AI agent can bridge the final gap: interpreting variable inputs and mapping them onto your standardized rails.
2. The replace bucket, eliminating process debt.
One of the most common mistakes organizations make is automating a bad process that only locks inefficiency into the system. AI adoption creates an opportunity to do something better: replace the process entirely with something designed for purpose. For example, instead of building an agent to manage a complex spreadsheet workflow, you can use AI to connect directly to the underlying data sources and eliminate the spreadsheet altogether. The result isn’t just faster, it’s fundamentally cleaner and more resilient.
3. The reimagine bucket, where “start small” connects to “think big.”
AI is not just a cost-reduction tool. It is a growth tool. This is where the conversation shifts. Instead of asking "how do we speed up this process?" we ask a more ambitious question: "What outcome are we actually trying to achieve?" Or even more provocatively, "If we had effectively unlimited reasoning capacity, what new problem could we solve?" At this stage, the focus moves away from simply accelerating outputs and toward redefining the outcomes themselves.
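To make the automation bucket concrete: the "last mile" agent described above takes a variable, unstructured input and maps it onto standardized fields. The sketch below illustrates the shape of that step. The field names, the sample email, and the pattern matching (which stands in for the AI agent's interpretation of free text) are all illustrative assumptions, not a production design.

```python
import re

def extract_trade_details(raw_email: str) -> dict:
    """Map an ad hoc broker email onto standardized fields.

    Regex pattern matching stands in here for the AI agent that
    would interpret genuinely variable input in production.
    """
    isin = re.search(r"\b([A-Z]{2}[A-Z0-9]{9}\d)\b", raw_email)
    qty = re.search(r"(?:qty|quantity)[:\s]+([\d,]+)", raw_email, re.I)
    side = re.search(r"\b(buy|sell)\b", raw_email, re.I)
    return {
        "isin": isin.group(1) if isin else None,
        "quantity": int(qty.group(1).replace(",", "")) if qty else None,
        "side": side.group(1).upper() if side else None,
    }

# An ad hoc broker email, normalized onto the standardized rails.
email = "Hi team, pls BUY qty: 1,500 of IE00B4BNMY34 for the fund. Thx"
print(extract_trade_details(email))
```

The value of the pattern is that the downstream workflow never changes: only the messy translation step at the boundary is automated.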
In the world of GenAI, the classic build versus buy question has evolved from a linear choice into a multidimensional matrix. Acting fast is about ruthlessly identifying where we need true differentiation and moving quickly without sacrificing governance. To do this effectively, organizations need to think across two layers: the application layer and the model layer, while balancing three strategic imperatives: flexibility, reliability, and differentiation.
The application layer is the interface where portfolio managers, analysts and operations teams interact with the technology. Here we see three paths: adopt, buy and build.
1. Adopt
The first option is to leverage AI capabilities already embedded within existing enterprise software (e.g., Microsoft Copilot, Google Gemini). This is the path of least resistance. It allows organizations to access GenAI functionality without implementing entirely new systems. Because these features are natively integrated into existing platforms, they benefit from continuous improvements and updates managed by large technology vendors.
In this model, you trade control for speed. You have limited influence over the product roadmap or the underlying models powering the functionality. And if the features don’t perfectly align with your specific use cases, your ability to customize them is constrained.
This approach works best for general productivity, client Q&As, and standard CRM workflows, areas where industry parity is sufficient. For example, relationship managers might use Copilot to summarize a 40-page investment committee pack ahead of a client meeting, while operations teams use embedded AI capabilities to draft routine client communications.
2. Buy
Another option is to buy dedicated, purpose-built GenAI solutions designed for specific business functions. This approach allows organizations to inject cutting-edge capabilities into targeted teams without building the technology themselves. Specialist vendors often innovate faster than general-purpose platforms because they focus on solving deep, vertical problems within specific industries.
The primary risks are reliance and integration. Under the Digital Operational Resilience Act,3 organizations must carefully assess the solvency and security of emerging AI vendors before allowing them to access sensitive or client data. At the same time, adopting multiple point solutions can easily create data silos if they are not properly integrated into the broader technology architecture.
This model works best for high-friction, text-heavy business functions such as legal review, regulatory compliance, and documentation analysis. For instance, organizations may deploy regulatory-specific AI tools to review prospectus updates against evolving regulatory guidelines.
That said, technical due diligence is required. Many AI products are little more than simple prompt frameworks built on top of large language models such as Gemini or GPT. Without careful evaluation, organizations may end up paying €50,000 per year for functionality their internal teams could replicate using existing enterprise tools like Copilot.
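A "prompt framework" of the kind just described often amounts to something like the sketch below: a fixed template wrapped around a commercial model. The template wording, the function names, and `call_llm` (a stand-in for a vendor API client) are all hypothetical; the point is how thin the layer can be.

```python
# A minimal "prompt framework": a template plus an LLM call.
# `call_llm` is a hypothetical stand-in for a vendor API client.

PROSPECTUS_REVIEW_PROMPT = """You are a regulatory compliance analyst.
Review the following prospectus extract against {regulation}.
List any clauses that may conflict with current guidance.

Prospectus extract:
{document}"""

def build_review_prompt(document: str, regulation: str) -> str:
    """Fill the fixed template with the document and target regulation."""
    return PROSPECTUS_REVIEW_PROMPT.format(document=document, regulation=regulation)

def review_document(document: str, regulation: str,
                    call_llm=lambda prompt: "(model response)") -> str:
    # In production, call_llm would send the prompt to GPT, Gemini, etc.
    return call_llm(build_review_prompt(document, regulation))

prompt = build_review_prompt("Sample clause text...", "UCITS guidelines")
print(prompt.splitlines()[0])
```

If a paid product reduces to this pattern, internal teams with access to an enterprise LLM can usually replicate it, which is precisely the due-diligence question to ask.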
3. Build
The final option is to develop proprietary GenAI applications tailored to your organization’s unique data and workflows. This path should be reserved for areas tied directly to your core value proposition, where technology can create a defensible competitive advantage. Building a custom solution allows you to embed your organization’s unique investment philosophy, risk frameworks, and decision processes directly into the technology. You retain full ownership of the intellectual property, the product roadmap, and the data governance. This requires significant operational commitment and is resource-intensive compared to the adopt or buy paths.
This model is best suited to core investment processes, such as proprietary trading strategies and critical workflows where the methodology itself is a differentiator. For example, rather than relying on a standard market tool, an organization might build a bespoke interface on top of its proprietary credit research database. Portfolio managers could then query decades of internal analyst notes, cross-referenced with live market data, allowing them to generate investment insights that are simply unavailable to the wider market.
If you choose to adopt or buy at the application layer, the decision at the model layer has largely already been made for you by the software vendor. In this case, your role shifts from model selection to due diligence, ensuring that the vendor’s chosen model meets your organization’s privacy, security, and governance standards.
However, if you decide to build at the application layer, a second strategic decision emerges at the model layer.
1. Buy “as is”
In this model, you connect your application via an Application Programming Interface (API)4 to powerful commercial models such as GPT, Claude or Gemini. This is typically the fastest route to market. Using an architecture known as retrieval-augmented generation (RAG), you can supply the model with relevant context for each query, such as live portfolio data or internal documents. This allows the large language model (LLM) to generate highly specific, organization-aware responses without requiring you to modify or retrain the model itself.
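The retrieval-augmented generation pattern can be sketched in a few lines. In this illustration, simple word-overlap scoring stands in for the vector search a production system would use, and the final prompt would be sent to a commercial model over the API; the document snippets and function names are assumptions for the example.

```python
# A minimal retrieval-augmented generation (RAG) sketch.
# Word-overlap scoring stands in for production vector search.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank internal documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Assemble the context-grounded prompt sent to the commercial LLM."""
    context = "\n".join(retrieve(query, documents))
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

docs = [
    "Fund A portfolio duration was extended to 6.2 years in Q3.",
    "The canteen menu changes every Tuesday.",
    "Fund A credit exposure is concentrated in investment-grade issuers.",
]
prompt = build_rag_prompt("What is the duration of Fund A portfolio?", docs)
```

Only the retrieved context crosses the API boundary with each query, which is why the governance question becomes what data is shared, rather than how the model is trained.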
The primary considerations are data sovereignty, cost, and model drift. Because queries are processed through an external API, organizations must implement robust privacy controls and governance over what data is shared. Usage-based pricing also means costs scale with query volumes. Finally, major vendors update their models frequently; without strict version control and testing, these updates can introduce model drift, leading to inconsistent outputs over time.
2. Build “hybrid”
In this approach, organizations partner with a cloud provider or model vendor to fine-tune a foundation model using their proprietary data. By training the model on historical investment memos, risk frameworks, and compliance manuals, it begins to understand the organization’s internal language, decision frameworks and institutional knowledge. It recognizes, for example, that “risk” carries a different meaning on a fixed income desk than it does within derivatives trading.
However, this approach is inherently data sensitive. A fine-tuned model is only as strong as the quality of the data used to train it. If sensitive information is embedded in the training data, the model may inadvertently expose elements of that information through carefully crafted prompts. In effect, the model can flatten traditional data access hierarchies if governance controls are not designed carefully.
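In practice, fine-tuning starts with curating training examples, commonly serialized as one JSON object per line (JSONL) in a chat format similar to that used by several commercial fine-tuning APIs. The memo content below is illustrative; as the paragraph above warns, anything embedded in these records can resurface in model outputs, so sensitive fields must be screened out before this stage.

```python
import json

# A sketch of preparing fine-tuning data in a chat-style JSONL format.
# The example content is invented; real records would be screened for
# sensitive information before training, since the model can reproduce it.

examples = [
    {"messages": [
        {"role": "system",
         "content": "You are an internal credit analyst assistant."},
        {"role": "user",
         "content": "Summarize our house view on issuer X."},
        {"role": "assistant",
         "content": "Stable outlook; spread risk concentrated at the long end."},
    ]},
]

def to_jsonl(records: list) -> str:
    """Serialize training records as one JSON object per line."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(examples)
print(jsonl.count("\n") + 1)  # one line per training example
```

The governance point is visible in the data itself: every assistant turn becomes latent model knowledge, so access hierarchies must be enforced at curation time, not after deployment.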
3. Build “custom”
Traditionally, building a custom model meant training an LLM from scratch, a path requiring rare expertise, vast datasets, and massive Graphics Processing Unit (GPU) clusters. Today, a more practical alternative is emerging through small language models. These models are efficient, compact, and can be hosted entirely within an organization’s own secure infrastructure. However, developing a proprietary model is still a significant undertaking and should only be considered when stringent conditions are met.
As we look toward the medium term, AI strategy will evolve beyond deploying simple, reactive chatbots toward orchestrating complex, autonomous agents.
Are you ready for that capability leap? In part 2 of this series, “Think big,” we will explore the three non-negotiable pillars of any successful AI strategy: cloud, data, and governance. We will share examples of journeys, bridging the gap between unstructured data and trust by design.
Perfection is the enemy of progress. Start now. Your organization’s AI maturity will only develop through the momentum of execution. The technology is ready. The strategic path is clear. The only remaining variable is the willingness to begin.