
AI maturity: obstacles, weaknesses and opportunities

This blog captures Phil Bolton’s responses to some tricky, challenging questions about AI – and how it can be successfully scaled by businesses. We know that AI scaling and maturity require an evidence-based culture, in which people are encouraged to ask questions. They also require excellent data and analytics capabilities and maturity, so that AI and machine learning outcomes give businesses a good experience and tangible benefits.

In this blog we ask the tough questions to further explore what it takes to scale AI, understand its weaknesses, and take a closer look at the use cases required to deploy production-ready, enterprise-grade solutions. Phil Bolton is a Deloitte Partner, Adjunct University Professor and AI specialist, based in Melbourne. He helps us lift the lid on what it takes to really scale AI.

Doing AI is hard! But scaling AI is even harder... 
That’s when an organisation really starts to realise the exponential value possible from AI. To scale AI, organisations need to have three foundational concepts in place:

A data-driven culture – I realise that term gets thrown around a lot these days, but fundamentally, you need a very healthy demand for analytics in your organisation. You need to establish a pipeline of desirable, ‘crunchy’ business challenges; otherwise, there’s no point in having AI if everyone is just going to ignore the insights it generates.

A high performing analytics team – Great talent (along with clearly defined career paths) is a must, coupled with rich data and the necessary shopping list of infrastructure, technology, tools, business engagement, executive sponsorship, processes, methodologies, frameworks, analytics disciplines, and so on. This is “how” you get AI work done well. If you can’t do it well, you could lose support from the business, or lack efficient and scalable processes (in which case you won’t be able to scale, because your analytics team has become the bottleneck).

MLOps (machine learning operations) – For some, this is a pretty new field that comes with an emerging set of tools and disciplines. MLOps is about making sure that all the good work done by the teams so far actually makes its way into a production environment, where it gets connected with business processes and other systems. Again, all best efforts can fail here if:

  • the model doesn’t move off the data scientists’ laptops
  • it’s not scheduled and running on a regular basis
  • it’s not monitored, so it can start drifting and going rogue without warning (see the sketch after this list)
  • it’s not connected to anything or anyone, or no actions are being informed by it.
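
To make the monitoring point concrete, here’s a minimal sketch of the kind of scheduled drift check an MLOps pipeline might run over a model’s production scores. It’s illustrative only – the Population Stability Index (PSI), the 0.2 alert threshold, and the stand-in score distributions are my assumptions, not a prescribed toolchain.

```python
"""Minimal, illustrative drift check for a deployed model's scores.

A sketch only: the threshold and distributions below are hypothetical.
"""
import numpy as np


def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two score distributions."""
    # Bin edges come from the baseline (e.g. training-time) scores.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    # Keep new scores inside the baseline's range so every value lands in a bin.
    current = np.clip(current, edges[0], edges[-1])
    base_pct = np.histogram(baseline, edges)[0] / len(baseline)
    curr_pct = np.histogram(current, edges)[0] / len(current)
    # Floor the proportions to avoid log(0) on empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))


# Run this on a schedule (cron, Airflow, etc.) against each new batch of scores.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_scores = rng.beta(2, 5, 10_000)  # stand-in for training-time scores
    todays_scores = rng.beta(2, 4, 1_000)     # stand-in for today's production batch
    drift = psi(training_scores, todays_scores)
    if drift > 0.2:  # 0.2 is a common rule-of-thumb alert level for PSI
        print(f"ALERT: PSI={drift:.3f} - investigate before trusting the outputs")
    else:
        print(f"OK: PSI={drift:.3f}")
```

In practice the alert would be wired into whatever paging or dashboard tooling the team already uses – the point is simply that the check runs automatically, not on someone’s laptop.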

Many companies are grappling with competing priorities and challenges, and with gaps in culture and capability, across all three concepts. We’re working with clients who have varying degrees of maturity across each of these three foundational concepts – and it’s a really exciting and rewarding journey to be on with them.

Not all weaknesses are equally important. I don’t think it’s the tools; they are rarely the limiting factor in analysing, synthesising, and visualising data insights. It’s not necessarily the infrastructure either – more on both of those below.

One important weakness to be wary of is the lack of AI use case execution and delivery processes. Different people do different things in different ways, so their customers have inconsistent experiences. There isn’t enough discipline around how things are done, so organisations aren’t building reusable assets, and there isn’t enough transparency around what’s going on. All of these interlinked threads cause issues now, and well into the future.

Maturity challenges can emerge from different areas. The end-to-end delivery of data science is a really complex, fragile, and relatively new value chain. A weak link – a weak capability in a fundamental part of that insight generation process – can undermine the whole value chain and undo the best efforts of any team. A substandard capability across multiple parts of the data and analytics process will undoubtedly result in poor outcomes, perpetuating a lack of trust and an unwillingness to engage in future AI use cases.

To be a little more specific, the most common challenges I’ve seen are:

Limited AI fluency

This is about the right people actually knowing what AI is, and what AI insights can be used for, so they can start thinking about the opportunities it unlocks. These people are the ones who should be building the pipeline of high priority questions AI can help answer.

Variety in tools

  • There’s lots to choose from!
  • Most tools can get the job done
  • Sure, some are better than others
  • But you’re better off getting a talented team who can start working with what you’ve got than waiting to find the ‘best’ tool.

Infrastructure foundations

As noted above, infrastructure can limit your ability to scale, automate, and make processes repeatable; but it doesn’t have to limit your insight generation. Once the organisation is actually realising value from the insights, the business case to lay those solid, sustainable foundations should write itself.

Data quality (DQ)

I follow a few simple principles:

  • The data will never be perfect; you need to be OK with that
  • You just have to get in there and get started (see the sketch below)
  • If you can make a smarter business decision sooner rather than later, you should!
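
To show how little it takes to act on those principles, here’s a minimal first-pass check – a sketch only, where the file name, columns, and the 20% threshold are hypothetical assumptions, not a standard:

```python
"""A quick 'good enough to get started?' data profile (illustrative only)."""
import pandas as pd

# Hypothetical dataset - swap in whatever table you're actually starting with.
df = pd.read_csv("customer_orders.csv")

# A rough profile: enough to know what to watch, not a reason to wait.
report = pd.DataFrame({
    "null_pct": df.isna().mean().round(3),  # share of missing values per column
    "n_unique": df.nunique(),               # rough cardinality check
})
print(report)
print(f"duplicate rows: {df.duplicated().mean():.1%}")

# Pragmatic rule of thumb (tune to taste): flag risky columns, don't block the work.
watch = report.index[report["null_pct"] > 0.20].tolist()
print("columns to treat with care:", watch or "none - get started")
```

The point isn’t the specific thresholds; it’s that a quick profile tells you where to be careful – and then you get started.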

Scaling AI successfully typically comes down to a combination of factors. Let’s focus on two of the most important ones.

First, business engagement from the start is key. Scaling AI requires executive sponsorship, support, and investment. The business needs to be engaged in building out that pipeline of high priority questions or business challenges to be addressed – because the analytics team should not be the one prioritising the business problems it works on. Having the business teams ‘inside the tent’ while delivering on those analytics use cases means data scientists stay on track and deliver to the brief.

Second, be clear on what levers are going to be pulled in the business once the insights have been generated. And, almost more importantly, make sure you have permission to pull those levers! For example, why would you work on a workforce optimisation use case if you don’t have the necessary permission to change your workforce rosters? Or why would you work on a geospatial optimisation project for retail stores if you don’t have the budget or permission to change which stores are going to close, open, or be refurbished?

Make sure you can act on the insights.

Before starting on any new AI initiative, I highly recommend taking a disciplined approach that incorporates these critical success factors. It’s a really exciting time to be part of this emerging field, with new disciplines, tools, and ideas. We’re living and working in the age of co-design and it’s all about finding the best blend of automation, artificial intelligence, and those uniquely human elements so that we can make the biggest impact together.