
Time, technology, talent

The three-pronged promise of cloud ML

Start with the business case, assess your data and model requirements, invest in the right talent, and be willing to fail fast to reap the maximum benefits from cloud machine learning.
Ashwin Patil
Jonathan Holdowsky

As organizations look to use machine learning (ML) to enhance their business strategies and operations, the retailer Wayfair looked to the cloud. Wayfair believes what so many other organizations seem to—that the cloud and the fully integrated suite of services it offers may give organizations the best approach to creating ML solutions with available time to market, existing resources, and current technologies. For its part, Wayfair pursued the cloud for its scalability—key for retailers with varying demand levels—as well as a full suite of integrated analytics and productivity services to drive actionable insights. The results? Faster insights into changing market conditions, saving time. Immediate access to the most current ML technologies. And productivity tools that help its talent work more efficiently.1

Indeed, ML is transforming nearly every industry, helping companies drive efficiencies, enable rapid innovation, and meet customer needs at an unprecedented pace and scale. As companies increasingly mature in their use of ML, the heavy reliance on large data sets and the need for fast, reliable processing power are expected to drive the future of ML into the cloud.

The well-known benefits of the cloud—including modular, elastic, rapidly deployable, and scalable infrastructure without heavy upfront investment—apply when bringing ML into the cloud. Cloud ML additionally introduces cutting-edge technologies, services, and platforms—including pretrained models and accelerators—that provide more options for how data science and engineering teams can collaborate to bring models from the lab to the enterprise. Indeed, because of all these advantages, time, technology, and talent—which might otherwise serve as challenges to scaling enterprise technology—comprise the three-pronged promise of cloud ML (see sidebar, “The measurable value of cloud ML”).

The recognition of this potential has fueled rapid growth: The current cloud ML market is estimated to be worth between US$2 billion and US$5 billion, with the potential to reach US$13 billion by 2025. Such growth suggests that adopters are increasingly realizing the benefits of cloud ML and that it is the future of artificial intelligence (AI). And with just 5% of the potential cloud ML market penetrated, according to one estimate, the community of adopters should only deepen across sectors and applications.2

To explore the potential cloud ML represents for organizations looking to innovate, we conducted interviews with nine leaders in cloud and AI whose perspectives largely inform the content that follows. In this paper, we provide an overview of three distinct operating models for cloud ML and offer insights for executives as they seek to unlock the full potential of the technology.

The measurable value of cloud ML

In the 2020 Deloitte State of AI in the Enterprise survey, 83% of respondents said that AI will be critically or very important to their organizations’ success within the next two years. Around two-thirds of respondents said they currently use ML as part of their AI initiatives. And our survey suggests that a majority of these AI/ML programs currently use or plan to use cloud infrastructure in some form. Our additional analysis of the survey data for this research showed that cloud ML, specifically, drives measurable benefits for the AI program (figure 1) and generally improves outcomes compared with AI deployments overall:

  • Forty-nine percent of cloud ML users said they saw “highly” improved process efficiencies (vs. 42% overall)
  • Forty-five percent saw “highly” improved decision-making (vs. 39% overall)
  • Thirty-nine percent experienced “significant” competitive advantage (vs. 26% overall)

Importantly, these data points show that cloud ML tools appear to make a notable impact on longer-term, strategic areas, such as competitive advantage and improved decision-making, compared to general AI adoption. This marks an important shift in how organizations run their AI programs (away from on-premises AI platforms), and a trend that is expected to continue as cloud ML becomes a greater focus for global organizations.

The addition of low-code/no-code AI tools, which are part of some cloud ML service offerings, can extend these advantages, with respondents reporting additional gains in decision-making, process efficiencies, competitive advantage, and other areas. For instance:

  • Fifty-six percent of low-code/no-code cloud ML users said they saw “highly” improved process efficiencies (vs. 49% of cloud ML users and 42% overall)
  • Fifty-four percent saw “highly” improved decision-making (vs. 45% of cloud ML users and 39% overall)
  • Forty-nine percent experienced “significant” competitive advantage (vs. 39% of cloud ML users and 26% overall)
  • Low-code/no-code cloud ML users also reported gains that general cloud ML users did not, in outcomes such as new insights and improved employee productivity, among others.

The emergence of low-code/no-code tools is part of a broader trend toward greater accessibility to cloud AI/ML solutions and a more diverse user community. These positive survey results suggest that this trend may only strengthen in the near to medium term.


Cloud ML: Adopting the right approach

“AI is a tool to help solve a problem,” says Rajen Sheth, vice president, AI, Google Cloud. “[In the past,] people were doing a lot of cool things with AI that didn’t solve an urgent problem. As a result, their projects stopped at the concept stage. We now start with a business problem and then figure out how AI solves it to create real value.”

As organizations look to advance their ML programs supported by the cloud, there are three basic approaches to cloud ML (figure 2) they can take—cloud AI platforms (model management in the cloud); cloud ML services (including pretrained models); and AutoML (off-the-shelf models trained with proprietary data).

Selecting the right approach starts with understanding the business context (what problem the organization is trying to solve), the technology context (to what degree the organization already has data and models on-premises or in the cloud), and the talent context (to what extent the organization has the right people in place to build and train models).

There is no one-size-fits-all approach. Depending on the business problem, multiple approaches may be employed at the same company. As Gordon Heinrich, senior solutions architect at Amazon Web Services, says, “At the end of the day, it comes down to where you can put ML and AI into a business process that yields a better result than you are currently doing.” Building a model in-house requires significant investment of time in data collection, feature engineering, and model development. Depending on the specific use case and its strategic importance, companies can employ cloud ML services, including pretrained models, to help speed up time to innovation and deliver results.

Conversational AI: This is one use case that came up repeatedly during our research—one where cloud ML and pretrained models seem particularly well positioned to help solve business, technology, and talent challenges. Conversational AI is frequently applied to improve call center operations and enhance customer service. In fact, one estimate suggests that by 2025, speech-to-text technology will account for 40% of all inbound voice communications to call centers.4

Organizations can use conversational AI to support customer service improvements in many ways—from answering customer questions directly via chatbots to supporting contact center staff in the background with next-best answer technology. These capabilities bring great value to customer service today. However, continuously enhancing conversational AI models takes a tremendous amount of work, time, and resources, and building these models in house is not differentiating enough for most companies. So, most companies employ large cloud vendors’ pretrained models instead of building their own.
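To make the “next-best answer” pattern concrete, here is a toy sketch of an agent-assist suggester. In production, the suggestion would come from a cloud vendor’s pretrained conversational model; here a simple token-overlap heuristic stands in for that model, and the knowledge base, function names, and scoring are our own illustrative assumptions.

```python
# Toy "next-best answer" assist for contact-center staff. A real deployment
# would call a pretrained cloud conversational model; token overlap stands
# in for the model here, and all names/data are illustrative assumptions.

KNOWLEDGE_BASE = {
    "How do I reset my password?": "Use the 'Forgot password' link on the sign-in page.",
    "Where is my order?": "Track your order under Account > Orders.",
    "How do I return an item?": "Start a return from Account > Orders within 30 days.",
}

def _tokens(text: str) -> set[str]:
    # Crude normalization: lowercase, strip question marks, split on spaces.
    return set(text.lower().replace("?", "").split())

def next_best_answer(customer_utterance: str) -> str:
    """Suggest the canned answer whose question best overlaps the utterance."""
    def overlap(question: str) -> int:
        return len(_tokens(question) & _tokens(customer_utterance))
    best_question = max(KNOWLEDGE_BASE, key=overlap)
    return KNOWLEDGE_BASE[best_question]

# An agent sees this suggestion in the background while on a call.
print(next_best_answer("where is my order right now"))
# -> Track your order under Account > Orders.
```

The same routing logic applies whether the scoring function is a keyword heuristic, as here, or a vendor’s pretrained intent model returning a confidence score.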

Broadly speaking, there is no single approach to a cloud ML initiative. But any approach should contemplate a complex array of potentially deep and vast issues—strategic, technical, and cultural in character—that span the full arc of the project life cycle.

For general cloud ML use cases, companies should think through comparable aspects of business, technology, and talent to determine the right approach, while keeping in mind the following high-level recommendations:

  • Business context: Define the problem statement.
    • The most impactful applications for cloud ML typically start with identifying the broader business problem AI is suited to help solve—whether it be scaling the digital business, enhancing customer experience, or addressing risk management challenges. Executives should then examine the business problem from both an industrywide and company-specific perspective.
  • Technology context: Understand which data/models you have and/or need and where they reside.
    • Consider how long it will take to get access to the data and to develop and test the models in terms of time, resources, risks, and deployment needs. Pretrained models or AutoML solutions may offer time and resource savings for initiatives that are less strategically differentiating for your business.
    • Consider program goals and outcomes. Starting small allows teams adequate time to test and refine the model and also to operationalize and scale it over time. Think about easy-to-achieve targets, mid- to longer-term business outcomes, and value.
  • Talent context: Assess data, ML, cloud, and data science expertise across your team.
    • Review what talent you have to advance your ML initiatives and what talent you need to achieve your stated goals.
    • Assess areas where cloud ML services could save training time, optimize the potential of your existing talent, or fill in resourcing gaps.

Although by no means comprehensive, a thoughtful evaluation of these three representative areas should point teams in the right direction as they assess which applications are most appropriate for cloud ML. It can also help ensure that valuable talent hours are directed strategically toward the company’s most differentiating features and initiatives.

Use cases: Cloud ML adoption across industries

The broad applicability of cloud ML business archetypes has driven rapid adoption across industries. Analysts, however, have indicated a standout opportunity for cloud ML to solve business challenges at scale for industries with significant call center footprints as well as supply chain challenges.5

As Eduardo Kassner, chief technology and innovation officer, One Commercial Partner Group, Microsoft, says, “Certainly, there is a big evolution in cloud ML services and APIs across a number of industries. We are seeing increased activity in manufacturing where you bring together IoT, big data, and advanced analytics.” 

In figure 3, we have compiled examples from six different industries where cloud ML has helped solve a range of tangible and definable business challenges. These companies have drawn on distinct cloud ML technology services such as speech and vision APIs and pretrained models for recommendation, fraud, and inventory management to advance their AI programs and achieve tangible business outcomes.

Talent approach: Cloud ML is a team sport

“When talking about talent in the AI/ML space, think of a staircase with four steps,” suggests Kassner. “The first step is the developers who leverage cognitive services APIs in their applications. The next step is developers who know how to call APIs for process automation, customer support, object detection, among other scenarios of cognitive services. The third level in the talent staircase is developers who understand big data, data cataloging, data warehousing, data wrangling, and data lakes. The fourth step is the one where specific algorithms and custom models are to be created with ML and AI Ops, which may require data scientists or data analysts.”

To achieve the business objectives of any cloud ML initiative, data science professionals typically work with a well-balanced talent team—cloud engineers, data engineers, and ML engineers. But PhD-level data science talent is scarce.6 As a result—and as our interviewees repeatedly confirmed—many companies are training a growing number of developers and engineers to contribute alongside them as non-PhD data scientists.

Some speculate that pretrained model and AutoML capabilities could reduce the need for scarce PhD-level data science talent and thereby help “democratize” ML. However, we recommend viewing these capabilities as a way to extend the reach of deep data science expertise across a broader team made up of new types of specialists. With cloud ML, pretrained models and AutoML tools have “hidden” the complexity and blurred the lines between previously distinct roles. This often empowers ML engineers to fill the data science PhD talent gap and affords adopters greater flexibility in how they staff teams. And, as these tools get more nuanced and powerful, such flexibility should only grow. Ultimately, however, as an organization’s modeling needs evolve and become more complex, a PhD-trained data scientist may become indispensable.

MLOps and governance: Maintaining rigorous quality and efficiency

The term MLOps, or “machine learning operations,” refers to the steps an organization takes as it develops, tests, deploys, and monitors cloud ML models over their life cycle. In effect, these steps represent a set of governance practices that ensure integrity throughout the life cycle. As development operations (DevOps) becomes increasingly automated under intensifying pressure for frequent model releases, these steps take on greater importance and urgency—especially as companies become more cloud-focused in their infrastructure.

Commenting on the importance of this trend, Heinrich notes, “[MLOps] is about explainability and getting a notification when a model is starting to drift. It is about making the data pipeline easier to use with data tagging and humans in the loop to achieve high-quality results. It really helps businesses feel more confident about putting algorithms into production.” 

While there is no single accepted global standard for MLOps process and best practices, a comprehensive end-to-end MLOps program should typically include four basic elements: versioning the model, autoscaling, continuous model monitoring and training, and retraining and redeployment (figure 4).
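The four elements above can be sketched as a minimal monitoring-and-retraining loop. This is not any vendor’s API: the registry structure, function names, and the drift threshold are our own illustrative assumptions.

```python
# Minimal sketch of the MLOps loop described above: version the model,
# monitor live accuracy for drift, and retrain/redeploy when quality
# degrades. Thresholds, names, and the registry are illustrative
# assumptions, not any cloud vendor's actual service.

from dataclasses import dataclass, field

DRIFT_THRESHOLD = 0.10  # retrain if accuracy drops >10 points below baseline

@dataclass
class ModelRegistry:
    versions: dict = field(default_factory=dict)  # version -> baseline accuracy
    live_version: int = 0

    def deploy(self, accuracy: float) -> int:
        """Register a new model version and promote it to production."""
        self.live_version += 1
        self.versions[self.live_version] = accuracy
        return self.live_version

    def needs_retraining(self, live_accuracy: float) -> bool:
        """Flag drift when live accuracy falls well below the version's baseline."""
        baseline = self.versions[self.live_version]
        return baseline - live_accuracy > DRIFT_THRESHOLD

registry = ModelRegistry()
registry.deploy(accuracy=0.92)           # v1 goes live
if registry.needs_retraining(0.78):      # monitoring reports degraded accuracy
    registry.deploy(accuracy=0.93)       # retrain and redeploy as v2
print(registry.live_version)             # -> 2
```

In practice the accuracy numbers would come from continuous evaluation against labeled production samples, and autoscaling would be handled by the serving infrastructure rather than application code.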

Model bias: A related concern within a broader ML governance program is model bias. In the past few years, we have seen instances of bias creeping into AI/ML models. In fact, a recent experiment revealed that an AI vision model associated various pejorative stereotypes with gender, race, and other characteristics.7 ImageNet is one of a number of projects underway to identify and mitigate the role that bias plays in AI.8

According to Sheth, as cloud ML moves toward providing decision support or recommendations, the issue of bias becomes even more acute because multiple human judgment factors are manufactured into the model. “There are tools that help explain why the model gave a particular outcome and where there are biases and where there aren’t and what impact such biases will have on people in general. Organizations will have to dig deep to understand what is happening and supplement the data accordingly. It’s a painstaking process,” he says.

One cloud ML area where bias manifests frequently is pretrained models. When developing and training models in-house, the development team can implement safeguards and testing protocols to lessen the effect of bias, since control of the process and training data rests with them. In contrast, they are less likely to be aware of bias when using pretrained models, as they typically have limited visibility into how a model was trained and thus how much bias, if any, was present prior to model deployment. Under such circumstances, testing for bias after model deployment requires heightened scrutiny, running a variety of known data sets through the model so that bias can be detected in its output. (For more information on AI ethics and avoiding bias, see Deloitte’s Trustworthy AI™ framework.)
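A simple form of such output-based testing is to compare favorable-outcome rates across groups in a curated test set (demographic parity). The sketch below is a minimal illustration under our own assumptions: the data, group labels, and any review threshold are invented for the example, and a real fairness audit would use many more metrics and cases.

```python
# Sketch of post-deployment bias testing on a pretrained model's outputs:
# compare positive-outcome rates across groups in a known test data set
# (demographic parity). Data and thresholds are illustrative assumptions,
# not a complete fairness audit.

def positive_rate(predictions: list[int]) -> float:
    # Share of favorable (1) outcomes in a group's predictions.
    return sum(predictions) / len(predictions)

def parity_gap(outcomes_by_group: dict[str, list[int]]) -> float:
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [positive_rate(preds) for preds in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical predictions (1 = favorable outcome) from a deployed model,
# scored on a curated test set in which each group is represented.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% favorable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% favorable
}

gap = parity_gap(outcomes)
print(f"parity gap: {gap:.3f}")  # -> parity gap: 0.375
# A gap this large would warrant review of the model and its training data.
```

Because the team did not control the pretrained model’s training data, this kind of output-level check—run across several known data sets—is often the only window it has into the model’s behavior.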

Five key considerations to set up cloud ML initiatives

Cloud ML can be transformational, and there is strong evidence that organizations are already reaping its benefits in wide-ranging initiatives across industries, use cases, and archetypes. Based on our research and interviews with industry leaders, here are some important points for organizations to keep in mind while starting out on their cloud ML journey:

  • Strategic alignment. Identify a business problem ripe for transformation that has the highest potential to benefit from AI augmentation. The wrong process may lead to frustration and wasted resources. Gaining both awareness and enthusiastic buy-in from the highest level of leadership into these initiatives can help identify meaningful problems to solve and maximize success.
  • Orchestrated talent. Remember that cloud ML is a team sport. It is not just about modeling. Investing in the right balance of data science and ML, cloud, and data engineering talent brings cloud ML to life.
  • Cost savings and elasticity of cloud. Take advantage of the opportunity the cloud provides to shift infrastructure costs from capex to opex. Given the extraordinary stress on infrastructure, the cloud provides the elasticity to use infrastructure on demand.
  • Data requirements. Know the quality and quantity of the training data you have and also what data you need and how you are going to get it.
  • Importance of trial and error. Embrace the notion of “failing fast.” AI needs experimentation and innovation. It is not a tried-and-tested recipe.

By putting some forethought into these key recommendations and gaining a thorough understanding of what they want to achieve, organizations can optimally employ cloud ML. They can thus reap the three-pronged promise of cloud ML—extending the reach of critical ML talent and technology investments and cutting time to innovation.

Deloitte Cloud Consulting Services

Cloud is more than a place, a journey, or a technology. It's an opportunity to reimagine everything. It is the power to transform. It is a catalyst for continuous reinvention—and the pathway to help organizations confidently discover their possible and make it actual. We enable enterprise transformation through innovative applications of cloud. Combining business acumen, integrated business and technology services, and a people-first approach, we help businesses discover and activate their possibilities. Our full spectrum of capabilities can support your business throughout your journey to the cloud—and beyond.


The authors wish to thank Rajen Sheth (Google Cloud), Eduardo Kassner (Microsoft, One Commercial Partner Group) and Gordon Heinrich (AWS), as well as Deloitte professionals David Kuder, James Ray, Jason Chiu, David Linthicum, and Scott Rodgers, for the time they spent in sharing their invaluable insights. The authors would also like to thank Lisa Beauchamp, Amy Gonzalez, Claire Melvin, and Saurabh Rijhwani for their marketing support. Negina Rood provided vital research support and Jay Parekh, critical analytical insights. Finally, this paper would not have been possible without the overarching and tireless support of Siri Anderson.

Cover image by: Traci Darbenko

  1. Aithority, “Wayfair chooses Google Cloud to help scale its growing business, while creating engaging consumer experiences,” January 13, 2020; Angus Loten, “Pandemic has online sellers leaning on cloud,” Wall Street Journal, August 24, 2020.

  2. Ed Anderson and David Smith, “Hype cycle for cloud computing, 2020,” Gartner, August 1, 2020.

  3. MIT Sloan Management Review, “How AI changes the rules: New imperatives for the intelligent organization,” February 10, 2020.

  4. Anthony Mullen, Bern Elliot, and Adrian Lee, Market guide for speech to text solutions, Gartner, April 22, 2020.

  5. ABI Research, Cloud-based AI in a post-COVID-19 world, 2020.

  6. Emilia Marius, “How to solve the talent shortage in data science,” InformationWeek, April 17, 2019.

  7. Will Knight, “AI is biased. Here’s how scientists are trying to fix it,” Wired, December 19, 2019.

  8. ImageNet website.

