Although governments are confident that AI can improve citizens’ lives, they still struggle to deploy AI at scale. Increasing government’s AI maturity requires pairing human and technical capabilities with strategy and governance.
One of the few bright spots to emerge from the difficult period of the COVID-19 pandemic has been the rapid development of an entirely new class of drug: the messenger RNA-based vaccine. While research into mRNA vaccines was not new, the pace with which multiple companies were able to use that approach to tackle a new pathogen opens new doors to treating everything from other viruses to cancer. These vaccines were not just the product of human genius and resources; artificial intelligence (AI) also played a key role.
AI helped to identify potential molecular “targets” on the virus where vaccines might act.1 As researchers homed in on mRNA as a tool, AI helped to optimise the mRNA sequences for efficacy and ease of manufacture.2 Once vaccines were developed, AI continued to help by predicting the spread of the virus to guide testing.3 The story of mRNA vaccines is a success story of collaboration between government and industry that shows the world-transforming power of AI when used at scale.
Given the important missions and large data stores of government organisations at every level, AI is primed to play an important role in the future of government. To reap AI’s transformative benefits, however, the technology needs to be scaled, and our global survey of 500 government leaders points to three key findings for organisations looking to adopt AI at scale.
Those organisational changes will help to drive AI from the fringes of an organisation into the heart of the mission. There AI can bring its transformational power to bear to improve the lives of citizens.
The transformational potential of AI is not lost on organisations at every level of government. For example, in our recent survey of government leaders, respondents at the national, state and local level all saw AI as important to future mission outcomes (figure 1).
The fact that governments are serious about AI adoption is also reflected in their growing investment in AI: 84% of agencies believe their AI investments will increase by 6% or more in the next fiscal year.4 With budget analysis showing that US federal funding for AI research and development alone is expected to have already grown by nearly 50% to more than US$6 billion in FY 2021, government leaders are clearly bullish on AI.5 As a result, they are making significant investments and exploring new AI projects.
With enthusiasm and a growing pool of resources, many government organisations have launched pilots to explore how AI can help their organisations. Government organisations are exploring a range of AI use cases from speech recognition to predictive maintenance.
Government sectors such as defence and health, which have a long history of AI experimentation, are among the leaders in fields such as responsible AI and data-sharing. For example, more than 35 countries have released AI strategies that include a focus on responsible AI, a finding backed by our respondents: 85% of surveyed government executives indicated that their organisation had an enterprisewide AI strategy.6
However, there is also a weakness in this pattern of pursuing AI. While government respondents are exploring a wide range of AI use cases, they are fully deploying only a small fraction of them (figure 2). This means that despite the significant effort and attention that government organisations are paying to AI, most projects remain at the pilot scale.
While pilots play a critical role in developing successful AI, an overreliance on them can be detrimental. To understand why, we analysed the reported capabilities and actions of respondent organisations to evaluate how prepared they were for AI at scale. The analysis showed that while a significant number of mature government organisations are blazing a trail in AI (28%), nearly half are still beginners (48%; see figure 3). Being a beginner in AI is not necessarily a problem; even most trailblazing organisations were beginners at one point. The problem many governments face is that their pattern of developing AI mostly through pilots and exploration may be holding back further development.
If organisations only pursue pilots, it can create a sense of overconfidence. As small-scale pilot projects succeed, organisations may mistakenly conclude that they have all the capabilities they need to tackle AI at scale. We observed signs of this overconfidence in our survey results: 73% of government respondents believe that they are ahead of the private sector in AI capabilities, and, as if to reinforce the optimism bias, 80% believe they are also ahead of their public sector peers.
The problem is that AI at scale requires different organisational capabilities than pilots or proofs of concept do. Pilots are typically smaller and narrower in focus than full-scale AI efforts. As a result, pilots can often make use of different technologies and data sources than would be required for full-scale use, and they may not need to meet such rigorous security and privacy requirements. Further, the smaller scope of pilots means that they touch fewer parts of an organisation, so change management is less of a factor in their success.
For these reasons, developing AI at scale simply looks different from running pilots. For example, a former chief data officer (CDO) of a large US city describes initially being surprised at the slow pace of AI development among peers in the private sector. Only later did the CDO realise that the slower pace may be needed to tackle larger AI projects. The limited scope of pilots may make them easier to pursue quickly, but larger-scale projects take time to ensure that the right data is gathered, the appropriate use case is chosen and costly mistakes are avoided while developing the technological architecture. For those just starting their AI journey, it can seem counterintuitive that slowing down may be the way to achieve AI at scale quickly. Slow is smooth; smooth is fast.7
In short, organisations that have only experimented with pilot-scale AI cannot reach the heights of at-scale AI simply by doing more of what they are already doing. Without intentional action to acquire the organisational capabilities needed for at-scale AI, organisations can easily become stuck in “pilot purgatory”, continually cycling through promising AI pilots but never realising the transformational benefits that AI promises for their core mission.
The good news is that government leaders appear to be increasingly aware of the gap between pilots and at-scale AI. Our survey respondents repeatedly highlighted the gap between their goals for AI and their current assessment of their AI capabilities (figure 4).
The US Department of Defense (DoD) is just one example of the path leading government organisations are taking to at-scale AI. In its 2018 AI strategy, the DoD outlined that, “The DoD will identify and implement new organisational approaches, establish key AI building blocks and standards, develop and attract AI talent, and introduce new operational models that will enable DoD to take advantage of AI systematically at enterprise scale.”8 Since then, DoD has established the Joint Artificial Intelligence Center (JAIC) to better govern AI use cases, set up the Joint Common Foundation (JCF) to provide ready-to-use tools to experiment with and scale AI use cases, and started offering new AI career paths to attract and retain talent.9
While organisations like the US DoD are at the forefront of AI in government, other organisations may find it hard to replicate organisational change of that nature. It is comparatively easy to adopt a technology and graft it onto existing organisational structures and business processes; it is much harder to adapt the organisation so that it can take full advantage of a new technology. To build the organisational capabilities needed for AI at scale, organisations need to adapt across six dimensions.
Trailblazing government organisations such as many in the defence and health sectors have already charted the way toward developing these six organisational capabilities. Following their lead can help other organisations to iteratively build capabilities across those six dimensions and realise the transformational benefits of AI at scale.
Senior leaders should ensure AI strategy supports the mission: An organisation’s AI strategy should not focus merely on deploying AI for its own sake, but on how AI can enable the organisation’s mission outcomes. This means that an organisation’s AI strategy cannot be a product purely of IT or technical teams but must be driven by senior leaders. Our survey found that organisations where senior leaders communicate a clear vision for AI are 50% more likely to achieve their desired outcomes with AI.10 In the early 2010s, Jeff Bezos mandated that every leader across Amazon develop a plan for how to use AI in their division. That mandate was instrumental in Amazon’s rise to become an AI leader today.11
Organisations where senior leaders communicate a clear vision for AI are 50% more likely to achieve their desired outcomes with AI.
Drive AI into the heart of the mission: AI should be about doing more and doing better. However, our analysis found that organisations that are just beginning their AI journey are more likely to use AI merely to improve internal efficiency. As organisations gain experience and become more mature, they are more likely to use AI for mission-focussed goals such as improving collaboration or creating new programmes. In one large-scale example, Singapore created a US$73 million AI-enabled digital twin of the city, not to make government more efficient, but to model decision-making, experiment with service provision and address some of the most pressing challenges facing the country.12
As organisations gain experience and become more mature, they are more likely to use AI for mission-focussed goals.
Balance outside hiring with reskilling: Our survey found that 69% of respondents would prefer to bring in new hires with the required skill sets. Given the widespread shortage of AI talent,13 agencies should balance outside hiring with reskilling their existing workforce. For example, both Denver and San Francisco city governments have established data academies to help train city workers and others in the basic skills needed to harness AI.14 The National Security Commission on Artificial Intelligence (NSCAI) goes a step further, calling for the establishment of a digital service academy, modelled after the US service academies, to produce a trained workforce that caters to all federal agencies.15
Building technical skills is a clear benefit to technical staff but can also help the wider organisation. Government will always need AI specialists, but to adopt AI at scale, it should also improve data literacy for the workers who must buy AI tools and services or use AI to deliver services to citizens. For example, Abu Dhabi has created AI training workshops to help government employees understand AI’s benefits and make better decisions around its utility.16
Reimagine processes and career paths: For government to truly revolutionise the lives of citizens using AI, it will have to revolutionise the way AI is deployed in its business processes and workflows. After all, you cannot deliver new results with old processes. Organisations that have significantly changed workflows are 36% more likely to achieve desired outcomes from their AI projects.17 Introducing new processes can also help organisations create new career paths for the people who work with this technology, which can be a critical enabler of success.18 We found that agencies that added new AI roles are 60% more likely to achieve desired outcomes.19 While adding new roles can help organisations, those benefits may be temporary unless organisations can provide new career pathways for talent to grow and develop. This is exactly what the Australian Public Service and the country’s Digital Transformation Agency collaborated on: defining over 150 new digital roles and creating the APS Career Pathfinder tool to help people in those roles explore digital career options in government.20
Organisations that have significantly changed workflows and added new AI roles are 36% and 60%, respectively, more likely to achieve desired outcomes from their AI projects.
Identify relevant data and determine its accessibility: Agencies that have access to the necessary data are twice as likely to exceed expectations in their AI initiatives.21 To make the best use of AI, agencies need to identify relevant datasets and develop platforms to access that data. For instance, the US Air Force has adopted the VAULT data platform, which gives airmen access to the cloud-based data and tools they need to use AI to improve readiness and mission success.22
Agencies that have access to the necessary data are twice as likely to exceed expectations in their AI initiatives.
Document and enforce MLOps: Developing and deploying AI is not without ethical risks, which is why clear documentation and enforceable processes are important to trustworthy and transparent AI. This is where MLOps—the set of automated pipelines, processes and tools that streamline all steps of AI model construction—can help. After all, it is difficult to address ethical issues with a model unless you know how that model was built and operated. In fact, our survey found that documenting and enforcing MLOps makes organisations twice as likely to achieve their goals and three times more likely to be prepared for AI risks.23 Organisations like the Internal Revenue Service (IRS) have discovered that scaling AI beyond the pilot stage across the agency requires adopting different, more rigorous processes for creating and managing AI models.24
Documenting and enforcing MLOps makes organisations twice as likely to achieve goals and three times more likely to be prepared for AI risks.
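To make the documentation side of MLOps concrete, the sketch below shows one minimal way an agency might record how a model was built. It is an illustrative Python example only: the function name, record fields, model name and metric values are hypothetical, not drawn from any agency’s actual pipeline.

```python
import hashlib
import json
from datetime import datetime, timezone

def document_model_run(model_name, data_source, params, metrics):
    """Create an auditable record of a model training run.

    Capturing which data, parameters and metrics produced a model is a
    core MLOps documentation practice: it lets reviewers trace how a
    deployed model was built when ethical or performance questions arise.
    """
    record = {
        "model_name": model_name,
        "data_source": data_source,
        "hyperparameters": params,
        "metrics": metrics,
        "trained_at": datetime.now(timezone.utc).isoformat(),
    }
    # Hashing the canonical JSON gives each run a stable identifier
    # and a simple integrity check for later audits.
    payload = json.dumps(record, sort_keys=True)
    record["record_id"] = hashlib.sha256(payload.encode()).hexdigest()[:12]
    return record

# Example: document a run of a hypothetical eligibility-screening model.
run = document_model_run(
    model_name="eligibility-classifier",
    data_source="claims_2021_v3.csv",
    params={"max_depth": 6, "n_estimators": 200},
    metrics={"accuracy": 0.91, "false_positive_rate": 0.04},
)
print(run["model_name"], run["record_id"])
```

In a real deployment these records would feed an automated pipeline rather than ad hoc scripts, but even this simple pattern illustrates the principle: a model without a documented lineage is a model whose risks cannot be reviewed.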
Prioritise change management: If AI is to be successful, it will, by definition, be disruptive for government organisations. AI can change not only how processes are done, but even what services government delivers to its citizens. Our analysis indicates that organisations that invest in change management are 48% more likely to report that AI initiatives exceed expectations.25 However, the more significant the change brought by AI, the more difficult it can be to manage. Governments should use the principles of behavioural economics to understand the human impact of transformations and how to provide appropriate support to encourage change.26
Organisations that invest in change management are 48% more likely to report that AI initiatives exceed expectations.
Build a diverse ecosystem: No government agency needs to solve every problem itself. From chatbots to speech-to-text, many solutions to technical problems already exist. Tapping into other entities that have existing technical solutions or have solved similar organisational challenges can accelerate progress toward AI at scale. In fact, our survey found that continually cultivating a wide range of relationships with industry, academia and other agencies dramatically improves the likelihood that an organisation has what it needs to scale AI (figure 5). As Eileen Vidrine, chief data officer at the US Air Force, says: “It's really about working together, building collaborative, trusted partnerships. It needs to be part of the conversation at the beginning and through the whole life cycle about trying to optimise interoperability and avoiding what I would call ‘vendor lock’ as much as possible.”
Find partners that complement your needs: Seek out partners that provide the capabilities your particular agency lacks; the right mix of partners will vary depending on where government needs help (figure 6). Partners don’t always need to be organisations at all. The City of LA’s Data Angels programme brought volunteer data scientists into government on a part-time basis to help with a variety of tasks.27 The programme tapped into private sector data specialists who wanted to help the community while retaining their jobs, bringing some of the top data talent into public service at little cost to the government.
AI is the future. Government leaders clearly understand this. But getting to that future can be more difficult and more rewarding than it may seem at the start. Taking a realistic view of the challenges inherent in developing AI at scale can help government develop the right capabilities, the right strategies and the right governance to make sure that the AI of the future serves the citizens of the future.
The Deloitte AI Institute for Government is a hub of innovative perspectives, groundbreaking research and immersive experiences focussed on artificial intelligence (AI) and its related technologies for the government audience. Through publications, events and workshops, our goal is to help government use AI ethically to deliver better services, improve operations and facilitate economic growth. We aren’t solely conducting research: we’re solving problems, keeping explainable and ethical AI at the forefront and the human experience at the core of our mission. We live in the Age of With—humans with machines, data with actions, decisions with confidence. The impact of AI on government and its workforce has only just begun. To learn more, visit Deloitte.com.