
The first AI use case: Infrastructure or technical debt?

Building AI that compounds over time

Authors:

  • Cédric Jadoul | Partner, Intelligent Automation
  • Laura Mathieu | Senior Manager, Customer Strategy and Design
  • Camille Peudpiece Demangel | Senior Consultant, Customer Strategy and Design

This podcast episode is based on the Deloitte Luxembourg article below and includes content generated, assisted, or edited using artificial intelligence technology. It has been reviewed by a human prior to publication. The voices featured are synthetic. This podcast is provided for general information purposes only and does not constitute any kind of professional advice rendered by Deloitte Luxembourg. Deloitte Luxembourg accepts no liability for any loss or damage whatsoever sustained by any person who uses or relies on the content of this podcast. 

AI spending will hit US$2.5 trillion in 2026, yet 95% of enterprise pilots fail to deliver measurable value. The gap is a failure of selection, not technology.

The first AI use case does more than solve a problem. It establishes whether an organization is building reusable infrastructure or one-off experiments.

Get it right and momentum builds. Get it wrong and every subsequent initiative inherits that failure. Without clear criteria, many teams default to what's technically impressive rather than strategically sound.

This article sets out a structured methodology for separating ideas worth exploring from ideas worth building, and a framework for identifying and executing initiatives that compound over time.

Introduction

Worldwide AI spending will reach US$2.5 trillion in 2026¹, according to Gartner. While Luxembourg boardrooms place their bets, the gap between investment and outcome continues to widen. A 2025 Massachusetts Institute of Technology (MIT) report² reveals that 95% of enterprise generative AI (GenAI) pilots fail to deliver measurable business value or never reach production.

Technology is not the bottleneck. The challenge lies in selection and scalability. While organizations generate numerous ideas—from chatbots to predictive analytics—ideas are not use cases, and use cases are not inherently valuable. Most teams debate which project to prioritize without asking a fundamental question: how do we recognize a use case worth building?

The first AI use case does more than solve a problem; it reshapes the employee and customer experience and surfaces operational friction. It establishes the framework for measuring value, governing AI, and building reusable infrastructure rather than one-off experiments that cannot scale.

Success builds momentum for a strategic AI program. Failure forces every subsequent initiative to fight an uphill battle against a poor precedent. The choice is not whether to invest in AI, but whether to select the first use case with deliberation or hope.

The pattern behind failed pilots

Most AI initiatives begin with a question that appears strategic: Where can we apply AI? But this question points teams toward technology demonstrations rather than business solutions.

The result is predictable. Leadership selects use cases that showcase AI capability—impressive in presentations but vague in execution. When success is described through broad terms like "efficiency" and "transformation", teams struggle to quantify the current state in concrete terms. If today's consumption of time and resources goes unmeasured, tomorrow's improvement cannot be proven.

This creates two failure modes that kill most pilots before they reach production:

  1. Neglecting governance: Most organizations treat their first use case as an experiment outside normal approval processes because it’s “just a pilot”. When attempting to scale, teams resist the sudden requirement for risk frameworks and approval chains. Effective governance is not a later addition; it must be built from the outset to enable speed while managing risk.
  2. Ignoring technical foundations: Organizations often choose use cases where data is nonexistent or siloed. Teams then spend 80% of their effort building pipelines before AI work begins, creating custom integrations that offer no leverage for future initiatives.

AI does not create organizational weaknesses; it reveals them. Fragmented data, unclear ownership, and inconsistent processes are pre-existing issues that AI simply scales. The technology makes them visible, amplifying either clarity and discipline or confusion and improvisation.

Without clear criteria for what makes a use case worth building, teams default to what's technically impressive rather than what's strategically sound.

Selection as a discipline

Effective use case selection is not brainstorming followed by voting. It's a structured process that separates ideas worth exploring from those worth building.

Most organizations approach this backward, starting with technological capabilities and searching for problems to solve. The better approach starts with quantified business pain and validates the foundation to address it through three deliberate stages:

Figure: The three-stage selection process. Source: Deloitte analysis and internal design.

  1. Generation: Consult broadly across operations, finance, customer service, and compliance. Identify where delays, errors, or manual work block strategic objectives. Every operational pain point is also an experience signal; delays and rework shape how customers perceive responsiveness and how employees manage cognitive load.
  2. Characterization: Define success in explicit terms—replace "improve efficiency" with "reduce invoice processing from 72 hours to 12 hours". This specificity is vital. Gartner research³ shows 30% of GenAI projects are abandoned after proof-of-concept because organizations cannot demonstrate return on investment (ROI) to stakeholders. Characterization forces clarity on value, scope, and governance.
  3. Validation: Separate use cases that should be built now from those that should wait. Use the value-to-effort matrix, plotting business impact against estimated effort. Apply three additional filters: time to value (under 12 weeks), stakeholder readiness, and strategic alignment. The strongest candidates sit in the high-impact, moderate-effort quadrant (a minimal scoring sketch follows this list).
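
To make the validation stage concrete, the short Python sketch below shows one way a team might encode the value-to-effort matrix and the three filters. The field names, scoring scales, thresholds other than the 12-week limit, and the shortlisting rule are illustrative assumptions, not a Deloitte tool or standard.

    # Illustrative sketch only: scales, field names and the shortlisting rule
    # are assumptions made for demonstration purposes.
    from dataclasses import dataclass

    @dataclass
    class UseCase:
        name: str
        business_impact: int         # 1 (low) to 5 (high), from the characterization stage
        estimated_effort: int        # 1 (low) to 5 (high), including data and integration work
        weeks_to_value: int          # elapsed weeks until value is measurable
        stakeholders_ready: bool     # sponsor, data owner and risk sign-off identified
        strategically_aligned: bool  # maps to a stated strategic objective

    def shortlist(candidates: list[UseCase]) -> list[UseCase]:
        """Keep high-impact, moderate-effort candidates that pass all three filters."""
        kept = [
            uc for uc in candidates
            if uc.business_impact >= 4 and uc.estimated_effort <= 3  # target quadrant
            and uc.weeks_to_value <= 12                              # time to value under 12 weeks
            and uc.stakeholders_ready and uc.strategically_aligned
        ]
        # Rank survivors by impact first, then by lower effort
        return sorted(kept, key=lambda uc: (-uc.business_impact, uc.estimated_effort))

    candidates = [
        UseCase("Invoice processing assistant", 5, 3, 10, True, True),
        UseCase("Open-ended chatbot demo", 2, 4, 20, False, False),
    ]
    print([uc.name for uc in shortlist(candidates)])  # ['Invoice processing assistant']

Even at this level of simplicity, writing the rule down forces the specificity that characterization demands: every candidate needs a quantified impact, effort, and time-to-value estimate before it can be scored at all.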

Organizations should avoid choosing based on competitor moves or vendor pitches; external validation ignores the unique operational context that determines whether a use case will actually work in an organization.

Operational readiness: Can you actually build it?

Strategic value means nothing if the organization lacks the data, governance, or capability to deliver. Execution readiness requires alignment across three dimensions from day one:

Figure: The three dimensions of execution readiness. Source: Deloitte analysis and internal design.

  1. Data readiness: The use case needs data that exists today, is accessible, and meets minimum quality standards—not data planned for future collection or that theoretically exists in legacy systems. The critical question is leverage: Does this use case create reusable data infrastructure, or does it generate point-to-point connections that become technical debt?
  2. Governance infrastructure: High-maturity organizations keep AI projects operational for three years or more at twice the rate of low-maturity peers.⁴ What differentiates them? They appoint dedicated AI leaders and distribute accountability across business, risk, legal, and technology. For organizations in Luxembourg's regulatory environment, governance is a design parameter. Navigating the Digital Operational Resilience Act (DORA), the AI Act, and cross-border data requirements from the start ensures solutions are regulation-ready.
  3. Organizational capability: Capability requires more than training people on tools; it requires cross-functional collaboration and a repeatable delivery methodology. Architectural principles should be defined before vendors start pitching solutions. Establish these processes now, even in simplified form, to develop the operational muscle memory required for every subsequent AI initiative (a simple checklist sketch follows this list).
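
As a minimal illustration, in the same hypothetical Python style as the earlier sketch, the checklist below turns the three readiness dimensions into questions that can be answered before build starts. The wording of the items and the pass rule are assumptions drawn from the points above, not a formal assessment framework.

    # Illustrative sketch only: checklist wording and the pass rule are assumptions.
    READINESS_CHECKLIST = {
        "data": [
            "Required data exists today, not in a future collection plan",
            "Data is accessible without a new extraction project",
            "Data meets minimum quality standards",
            "The resulting pipeline is reusable by later use cases",
        ],
        "governance": [
            "A dedicated AI leader is accountable for the use case",
            "Accountability is distributed across business, risk, legal and technology",
            "DORA, AI Act and cross-border data requirements are reviewed at design time",
        ],
        "capability": [
            "A cross-functional delivery team is staffed",
            "A repeatable delivery methodology is agreed",
            "Architectural principles are defined before vendor selection",
        ],
    }

    def readiness_gaps(answers: dict[str, list[bool]]) -> dict[str, list[str]]:
        """Return unmet items per dimension; an empty result means the use case is buildable."""
        gaps: dict[str, list[str]] = {}
        for dimension, items in READINESS_CHECKLIST.items():
            given = answers.get(dimension, [])
            missing = [item for i, item in enumerate(items)
                       if i >= len(given) or not given[i]]  # unanswered counts as not ready
            if missing:
                gaps[dimension] = missing
        return gaps

    # Example: only the data dimension has been assessed so far
    print(readiness_gaps({"data": [True, True, False, True]}))

The useful property of this shape is that the checklist itself becomes reusable infrastructure: the same questions are asked of every subsequent use case.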

While the validation process produces a business case with quantified costs, conservative benefit projections, and clear success thresholds, the primary output is not just a validated use case. It's the selection methodology, governance framework, and technical standards that serve as permanent infrastructure for everything that follows.

Conclusion

Successful organizations rarely pick the perfect use case on their first try. Instead, they pick one that forces them to answer hard questions early: How do we quantify value? Who owns accountability? What technical standards will we hold every initiative to?

The path built to get that first use case live becomes the path every subsequent project follows. The governance frameworks, technical standards, and collaboration models established are not temporary scaffolding. They're permanent infrastructure that either accelerates or blocks future growth.

A well-chosen initiative with clear value metrics and achievable scope teaches an organization how to execute AI at scale. The first use case must prove the organization knows how to identify, validate, and execute AI initiatives that compound rather than compete.


¹ Gartner, "Gartner Says Worldwide AI Spending Will Total $2.5 Trillion in 2026," press release, 15 January 2026.

² Aditya Challapally, Chris Pease, Ramesh Raskar and Pradyumna Chari, The GenAI Divide: State of AI in Business 2025, MIT Nanda, July 2025, p. 3.

³ Gartner, "Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept By End of 2025," press release, 29 July 2024.

⁴ Gartner, "Gartner Survey Finds 45% of Organizations With High AI Maturity Keep AI Projects Operational for at Least Three Years," press release, 30 June 2025.
