Artificial intelligence in all its forms can enable powerful public sector innovations in areas as diverse as national security, food safety and health care. To realise that potential, however, agencies need a holistic AI strategy.
The city of Chicago is using algorithms to try to prevent crimes before they happen. In Pittsburgh, traffic lights that use artificial intelligence (AI) have helped cut travel times by 25 per cent and idling times by 40 per cent.1 Meanwhile, the European Union’s real-time early detection and alert system (RED) employs AI to counter terrorism, using natural language processing (NLP) to monitor and analyse social media conversations.2
Such examples illustrate how AI can improve government services. As it continues to be enhanced and deployed, AI can truly transform this arena, generating new insights and predictions, increasing speed and productivity and creating entirely new approaches to citizen interactions. AI in all its forms can generate powerful new abilities in areas as diverse as national security, food safety, regulation and health care.
But to fully realise these benefits, leaders must look at AI strategically and holistically. Many government organisations have only begun planning how to incorporate AI into their missions and technology. The decisions they make in the next three years could determine their success or failure well into the next decade, as AI technologies continue to evolve.
It will be challenging, full stop. But such transformations, affecting as they do all or most of the organisation and its interactions, are never easy. The silver lining is that agencies have spent the last decade building their cloud, big data and interoperability capabilities, and that effort will help support this next wave of AI technology. AI-led transformation promises to open completely new horizons in process and performance, fundamentally changing how government delivers value to its citizens.
But this can’t be achieved simply by grafting AI onto existing organisations and processes. Maximising its value will require an integrated series of decisions and actions. These decisions will involve complex choices: Which applications to prioritise? Which technologies to use? How to articulate AI’s value to the workforce? How to manage AI projects? Should we use internal talent, external partners, or both?
This study outlines an integrated approach to an AI strategy that can help government decision-makers answer these questions and begin the effort required to best meet their own needs.
Without an overarching strategy, complex technology initiatives often drift; at best, they fix easy problems in siloed departments and at worst, they automate inefficiencies. To achieve real transformation across the organisation and unlock new value, effective AI implementation requires a carefully considered strategy with an enterprisewide perspective.
But AI strategy is still relatively new—and for many organisations, nonexistent. According to a recent IDC survey, half of responding businesses believed artificial intelligence was a priority, but just 25 per cent had a broad AI strategy in place. A quarter reported that up to half of their AI projects failed to meet their targets.3
And government is behind the private sector on the strategy curve. In a 2019 survey of more than 600 US federal AI decision-makers, 60 per cent believed their leadership wasn’t aligned with the needs of their AI team. The most commonly cited roadblocks were limited resources and a lack of clear policies or direction from leaders.4
But a coherent AI strategy can attack these barriers while building a compelling case for funding. A winning plan establishes clear direction and policies that keep AI teams focussed on outcomes that create significant impacts on the agency mission. Of course, strategy alone won’t realise all the benefits of AI; that will require adequate investments, a level of readiness, managerial commitment and a lot of planning and hard work. But an effective AI strategy creates a foundation that promotes success.
For some, an AI “strategy” is simply a statement of aspirations—but since the lack of planning frustrates and confuses implementation efforts, this definition is clearly inadequate. Other strategies treat AI purely as a technical challenge, with a narrow plan based on a handful of isolated use cases. This limited focus risks missing opportunities and the organisational changes needed to produce a truly transformational impact on performance and mission.
An effective strategy should align technological choices with the overarching organisational vision and, drawing on lessons learnt from past technology transformations, incorporate both technical and managerial perspectives.5 A holistic strategy, in turn, should support the broader agency strategy as well as federal goals for AI adoption. Because of the rapid evolution of AI, strategies also should be updated periodically to keep up with technological developments.6
What does this mean for government leaders charged with AI strategy? The scholar Michael Porter has observed that “The essence of strategy is choosing what not to do.”7 With so many different types of technology and potential opportunities, leaders with limited resources should decide carefully about the use of AI—and the choices can seem overwhelming.
In their book Playing to Win, A.G. Lafley and Roger Martin introduce a pragmatic framework that we build upon here.8 The strategic choice cascade (figure 1) is adapted here for government AI. It’s based on the premise that strategy isn’t just a declaration of intent, but ultimately should involve a set of choices that articulate where and how AI will be used to create value and the resources, governance and controls needed to do so.9
This approach has five core elements—five sets of critical choices—that, as a whole, comprise a clearly articulated strategy. The first choice an organisation must make concerns vision, specifically its level of AI ambition and the related goals and aspirations. With that vision as a guide, the next two elements concern choosing where to focus AI attention and investment, in terms of problem areas, mission demands, services and technologies, and how to create value in those areas, including an approach to piloting and scaling. The final two choices in the cascade concern the capabilities and management systems required to realise the specified value in focus areas. They answer questions such as, “What culture and capabilities should be in place?”, including personnel, partners, data and platforms; and “What management systems are required?”, including performance measures, change management, governance and data management.10
These five sets of choices link together and reinforce one another,11 like the entwined strands of a double helix: one strand comprises technology choices, the other managerial and organisational ones. They ensure that the agency’s AI goals are clearly linked to business outcomes and identify the critical activities and interconnections that can help achieve success. Figure 1 illustrates some of the individual choices at each stage through the twin lenses of management and technology.
You need both strands to capture the full value of AI. Without understanding the technology and its potential, decision-makers can’t identify transformative applications; without a managerial perspective, technological staff can fail to identify and address the inevitable change-management challenges. Regardless of their role within the organisation, however, government planners must sift through an assortment of potential issues, including changing workforce roles, as well as the data security and ethical concerns that arise when machines make decisions previously made by people.
And these issues are amplified when an AI programme’s goal is not just incremental but dramatic improvement.
Let’s take a closer look at each element of the strategic choice cascade and see how government agencies are using them in their AI strategies.
Transform the Department of Energy into a world-leading AI enterprise by accelerating development, delivery and adoption of AI.
—Artificial Intelligence and Technology Office, US Department of Energy12
In 2019, the US Department of Energy (DoE) issued this statement of its AI goals and aspirations to transform its operations in line with national strategy. The US Office of the Director of National Intelligence issued a similar vision statement, declaring that it will use AI technologies to secure and maintain a strategic competitive advantage for the intelligence community.13 Both examples represent a high level of AI ambition, directed towards broad, mission-focussed transformation.
Other agencies may have less ambitious goals; they may wish to use AI to address a particular, long-standing problem, to redesign a specific process, to free up staff, increase productivity, or improve customer interactions. But while government AI strategies can have multiple objectives and ambitions, all of them must consider the well-being of people and society. For example, AI-based intelligent automation can help make decisions for both simple and complex actions. But agencies that contemplate delegating more decision-making to machines must be able to understand and measure the resulting risks and social impacts, as well as the benefits. Thus, communicating and socialising the AI strategy with citizens is key to gaining wider acceptance for AI in government.
For this reason and others, the organisation's goals, aspirations and requirements regarding the ethics of AI should inform this and every other level of the cascade.
Regardless of the ambition, listing aspirations and goals is only a first step. As the vision crystallises, the next steps—the identification of opportunities and execution requirements—come into focus. As the US Department of Defense (DoD) writes in its AI strategy, “Realising this vision requires identifying appropriate use cases for AI across the DoD, rapidly piloting solutions and scaling successes across the enterprise.”
What particular aims, ambitions and requirements do we have for AI?
Which elements of our wider strategy and aims will AI support?
What is the long-term ambition behind our investment in AI?
How will our organisational values help us address questions of ethics, privacy and transparency?
Will AI generate savings or some other positive outcome to justify the investment?
Focus: Where should we concentrate our AI investments?
The Department of Defense aims to apply AI to key mission areas …
—US Department of Defense AI strategy14
Applications based on AI can reduce backlogs, cut costs, stretch resources, free workers from mundane tasks, improve the accuracy of projections and bring intelligence to scores of processes, systems and uses.15 A variety of solutions are available; the important question is the choice of problems and opportunities.
The DoE focuses its technology-based initiatives in this way:
The mission of the Energy Department is to ensure America’s security and prosperity by addressing its energy, environmental and nuclear challenges through transformative science and technology solutions.16
Its budget priorities reflect this commitment. Much of DoE’s planned US$20 million for research and development in AI and machine learning will go towards two goals: more secure and resilient power grid operation and management and research towards transformative scientific solutions.17
Specific statements of intent provide “guardrails” for DoE decision-makers, keeping them focussed on priority needs. Each choice involved may be explained and championed separately, but, in practice, each choice must align with and reinforce the others. In this case, DoE’s “where to focus” choice was reinforced by its “management systems required” decision to launch the Artificial Intelligence and Technology Office in September 2019. The office will focus on accelerating and coordinating AI delivery and scaling it across the department.18
Leaders deciding where to focus their AI efforts can consider the question through several different lenses. One might consider which problems to address, what processes to focus on, or what part of the organisation would benefit most from the investment. There’s no one “best way”; in fact, looking through multiple lenses can be helpful.
Figure 2 illustrates potential AI applications in human services, with examples from all three lenses. In human services, AI has considerable potential to transform processes for caseworkers and administrators, improve service and enhance mission outcomes. Policymakers will have to determine how to prioritise advanced technological solutions. Should AI be used to automate case documentation and reporting? To create chatbots to answer queries? To compare our beneficiary programmes with those of other organisations, to better identify the optimal mix of services? Or should we strive for all three goals?
A broad initial view can help government agencies find the right blend of uses. It can help them avoid becoming focussed on a single class of applications—simple automation of back-office tasks, for example—at the expense of more transformative opportunities.
Every agency faces a similar set of questions about where to focus its activities and how to balance quick solutions with those promising longer-term transformation. A carefully articulated AI strategy that clearly establishes priority focus areas should answer these questions. To do so, it must consider both managerial and technological perspectives.
The technological focus should be determined by the mission need. Broadly, the pool of available technologies roughly corresponds with each of the bubbles in figure 2. Intelligent automation tools, also called robotic process automation, are highly relevant to back-office functions. Customer interaction or engagement technologies bring AI directly to the customers, whether they’re citizens, employees, or other stakeholders. Finally, a variety of AI insight tools can identify patterns or develop predictions that can be highly relevant to many agency missions.
For agencies beginning their AI journey, this level of detail will be sufficient. Others may benefit from a deeper dive into the opportunities created by specific technologies being deployed in a variety of government settings, separately or in concert. These applications include intelligent robotics, computer vision, natural language processing, speech recognition, machine translation, rules-based systems and machine learning (figure 3).
The technology perspective, obviously, must go beyond separate innovations to consider how they’ll be utilised. Any of these technologies can be implemented in different ways, with differing degrees of human involvement, ranging from tools that simply assist human decision-makers to systems that act with substantial autonomy.
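To make those degrees of human involvement concrete, here is a minimal sketch, in Python, of a common “human in the loop” pattern: a model decides clear-cut cases automatically and escalates uncertain ones to a person. The case data, field names and confidence threshold are hypothetical, not drawn from any agency system.

```python
# Illustrative human-in-the-loop routing: auto-decide confident cases,
# escalate uncertain ones to a human reviewer. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    outcome: str      # "approve", "deny" or "needs_human_review"
    confidence: float

def route_case(case_id: str, model_score: float,
               auto_threshold: float = 0.90) -> Decision:
    """Route a case based on model confidence.

    model_score is the model's estimated probability that the case
    qualifies; scores near 0 or 1 are confident, scores near 0.5 are not.
    """
    confidence = max(model_score, 1.0 - model_score)
    if confidence < auto_threshold:
        return Decision(case_id, "needs_human_review", confidence)
    outcome = "approve" if model_score >= 0.5 else "deny"
    return Decision(case_id, outcome, confidence)

# Only the clear-cut cases are decided automatically.
for cid, score in [("A-101", 0.97), ("A-102", 0.62), ("A-103", 0.04)]:
    print(route_case(cid, score))
```

Raising or lowering the threshold shifts work between machine and human, which is precisely the strategic choice this section describes.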
A skilled AI implementation team will understand that each technology has different uses, with distinct capabilities, benefits and weaknesses. The list of possible use cases and available technologies will continue to expand and grow in significance. (See AI-augmented government for a primer on AI technologies and their deployment in government.)
The focus questions for the technology strategy have been answered when the organisation knows where to concentrate its investments, in terms of problems and processes, with a level of detail that seems appropriate to the decision-makers. As with all strategies, more detail will be needed as the strategy is translated and further developed within the organisation.
Success: How will AI deployment create value?
It is likely that the most transformative AI-enabled capabilities will arise from experiments at the “forward edge,” that is, discovered by the users themselves in contexts far removed from centralised offices and laboratories. Taking advantage of this concept of decentralised development and experimentation will require the department to put in place key building blocks and platforms to scale and democratise access to AI. This includes creating a common foundation of shared data, reusable tools, frameworks and standards, and cloud and edge services.
—US Department of Defense AI strategy19
AI-based technologies can create such wide-ranging impacts in so many fields and at so many levels that it can be difficult to imagine how they would change our own work environments. Next to budget constraints, “lack of conceptual understanding about AI (e.g., its proposed value to mission)” was the second-most common barrier to AI implementation cited by respondents in a 2019 NextGov study.20
That’s why an AI strategy should articulate its value both to the enterprise in general, including the workforce, and to those affected by its mission, which includes taxpayers as well as public and private stakeholders, reflecting a clear alignment with broader goals, strategies and policies. Proactive communication regarding the value that AI creates for an agency and those it serves should help to allay fears of misuse and enhance trust.
It may help to focus on a few transformative capabilities to start. The most common starting points for AI value creation in government today are typically rapid data analysis, intelligent automation and predictive analytics.
Data analysis. AI feeds on data as whales consume krill: by the tonne, in real time and in big gulps. AI-based systems can make sense of large amounts of data quickly, which translates into reduced wait times, fewer errors and faster emergency responses. Such systems can also mine deeper insights, identifying underserved populations and creating better customer experiences.21
Intelligent automation. AI can speed up existing tasks and perform jobs beyond human ability. Robert Cardillo, former director of the National Geospatial-Intelligence Agency, estimated that without AI, his agency would have needed to hire more than 8 million imagery analysts by 2037, simply to keep up with the flow of satellite intelligence.22
Predictive analytics. Police in Durham, North Carolina, have used NLP to spot otherwise-hidden crime trends and correlations in reports and records, allowing for better predictions and quicker interventions. The effort contributed to a 39 per cent drop in violent crime in Durham between 2007 and 2014.23
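The Durham system’s internals aren’t public, so what follows is only a generic sketch, assuming scikit-learn is available, of how NLP can group free-text incident reports so that recurring patterns stand out. The reports and the cluster count are invented for illustration.

```python
# Generic sketch: cluster free-text incident reports with TF-IDF features
# so recurring patterns surface. Reports are invented; real systems would
# use far more data and careful validation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reports = [
    "copper wiring stolen from vacant property on Main St",
    "break-in at vacant house, pipes and wiring removed",
    "car window smashed overnight in hospital car park",
    "vehicle broken into at car park, radio taken",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(reports)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in sorted(zip(labels, reports)):
    print(label, "|", text)
```

Even this toy example hints at the value: the two property-stripping reports and the two vehicle break-ins should land in separate clusters, which an analyst can then investigate.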
When explaining how AI creates value, it’s imperative to include the perspectives of the organisation and its workforce. The benefits to the organisation are relatively easy to articulate—process automation improves speed and quality; analytics identify investment priorities in areas of utmost need; and predictive tools deliver completely new insights and services.
For the workforce, however, the argument is more nuanced. Staff members must be reassured that AI won’t eliminate their jobs. The strategy should show, as applicable, how AI-based tools such as language translation can enhance their existing roles. Freeing employees from mechanical tasks in favour of more creative, problem-solving, people-facing work creates more value for constituents and can greatly enhance job satisfaction.
In an age of electronic warfare, for example, US Army officers are constantly collecting, analysing and classifying unknown radio frequency signals, any of which may come from allies, malicious actors or random sources. How can they manage this overload? The Army created a challenge that drew some 150 teams from industry, research organisations and universities. Models offered by winning teams, based on machine learning, gave the Army a head start in developing a solution to sort through the signal chaos.24 The resulting models have the potential to sift through signal data with much greater speed and accuracy.25
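The winning models haven’t been published, so as a toy stand-in the sketch below trains an off-the-shelf classifier on synthetic “spectral” features to separate two notional signal types. Every feature name and figure here is invented for illustration.

```python
# Toy stand-in for signal classification: synthetic spectral features,
# two notional signal classes. Real solutions use far richer models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: frequency offset, bandwidth, burst length.
known   = rng.normal([0.0, 1.0, 5.0], 0.5, size=(n, 3))
unknown = rng.normal([1.5, 2.0, 1.0], 0.5, size=(n, 3))
X = np.vstack([known, unknown])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

The point is not the model but the workflow: labelled examples in, a triage tool out, freeing analysts to concentrate on the genuinely ambiguous signals.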
Governments should build a shared vision of an augmented workplace with the public-sector professionals who will be working alongside these technologies, or incorporating them into their own work. For these professionals, as well as citizens and other stakeholders, this vision must address the ethical considerations of AI applications (see sidebar “Managing ethical issues”) and emphasise its value to the organisation and its mission.
In view of the risks and uncertainties associated with AI, many governments are developing and implementing regulatory and ethical frameworks for its implementation.26 The US Department of Defense (DoD), for example, is planning to hire an AI ethicist to guide its development and deployment of AI.27 Other methods being used to meet this challenge include:
Creating privacy and ethics frameworks. Many governments are formalising their own approach to these risks; the United Kingdom has published an ethics framework to clarify how public entities should treat their data.28
Developing AI toolkits. An AI toolkit is a collection of tools, guidelines and principles that helps developers consider ethical implications as they build algorithms for governments. Dubai’s toolkit, for example, includes a self-assessment tool that evaluates AI systems against the city’s ethics standards.29
Mitigating risk and bias. Risk and bias can be diminished by encouraging diversity and inclusion in design teams. Agencies also should train developers, data scientists and data architects on the importance of ethics relating to AI applications.30 To reduce historical biases in data, it’s important to use training datasets that are diverse in terms of race, gender, ethnicity and nationality.31 Tools for detecting and correcting bias are evolving rapidly; a minimal bias check is sketched after this list.
Guaranteeing transparency. Agencies should emphasise the creation of algorithms that can enhance transparency and increase trust among those affected. According to a programme note by the Defense Advanced Research Projects Agency, “Explainable AI … will be essential if future warfighters are to understand, appropriately trust and effectively manage an emerging generation of artificially intelligent machine partners.”32
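Bias-detection tools vary widely, but many begin with simple group metrics. As a minimal sketch, the code below computes a demographic-parity gap, the difference in approval rates across groups, on hypothetical decision data; a real audit would apply several metrics with proper statistical care.

```python
# Minimal bias check: compare approval rates across groups. Decision data
# is invented; a large gap is a signal to investigate, not a verdict.
from collections import defaultdict

decisions = [  # (group, approved) pairs, hypothetical
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # {'A': 0.75, 'B': 0.25}
print(f"demographic parity gap: {gap:.2f}")   # 0.50 -> investigate
```

On the transparency side, one widely used, model-agnostic technique is permutation importance: shuffle one input at a time and measure how much performance drops. The sketch below applies scikit-learn’s implementation to a synthetic model; it illustrates the general technique, not DARPA’s explainable-AI programme.

```python
# Explanation sketch: permutation importance reveals which inputs a
# trained model actually relies on. Data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=400, n_features=5,
                           n_informative=2, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```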
After identifying goals for AI, the next step is deciding how to scale it across the organisation. The architects of the scaling strategy should always keep in mind the relatively low historical success rate for many large-scale government technology projects.33
Moving AI from pilot to production isn’t a simple matter of installation. It will involve new challenges, both technical and managerial, involving the ongoing cleaning and maintenance of data; integration with a range of systems, platforms and processes; providing employees with training and experience; and, perhaps, changing the entire operating model of some parts of the organisation. Our advice: Break the process into steps or pieces, each clearly articulated to increase the likelihood of success. Test with pilot programmes before scaling. And don’t neglect the “capabilities” element of strategy, which, when coupled with an accurate assessment of today’s AI readiness, can identify gaps in current technical and talent resources.
The Central Intelligence Agency’s former director of digital futures, Teresa Smetzer, urged her agency to “start small with incubation, do proofs of concept, evaluate multiple technologies [and] multiple approaches. Learn from that and then expand on that.”34
The architects must also decide how to measure the value of proposed AI solutions. And if a potential value has been identified, how will it be tracked? On deployment, for example, or usage? The ability to accurately identify value—and plan deployment, sets of metrics and expectations—will depend, in part, on the maturity and complexity of the AI technology and application. It will also be important to explicitly define performance standards in terms of accuracy, explainability, transparency and bias.
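One lightweight way to make such standards explicit is to encode them as a pre-deployment gate that evaluation results must pass. The sketch below is illustrative only; the metric names and thresholds are invented, and each agency would set its own.

```python
# Illustrative pre-deployment gate: compare evaluation results against
# explicit, named standards. All thresholds are invented examples.
STANDARDS = {
    "accuracy_min": 0.90,            # overall correctness
    "parity_gap_max": 0.05,          # max approval-rate gap across groups
    "explained_fraction_min": 0.95,  # share of decisions with an explanation
}

def deployment_gate(results: dict) -> list:
    """Return the list of standards the results fail to meet."""
    failures = []
    if results["accuracy"] < STANDARDS["accuracy_min"]:
        failures.append("accuracy below standard")
    if results["parity_gap"] > STANDARDS["parity_gap_max"]:
        failures.append("bias gap above standard")
    if results["explained_fraction"] < STANDARDS["explained_fraction_min"]:
        failures.append("too many unexplained decisions")
    return failures

print(deployment_gate(
    {"accuracy": 0.93, "parity_gap": 0.08, "explained_fraction": 0.99}
))  # -> ['bias gap above standard']
```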
Capabilities: What do we need to execute our AI strategy?
The intelligence community must develop a more technologically sophisticated and enterprise-aware workforce.
—US Office of the Director of National Intelligence35
An agency can have the right roadmap, technology and funding for an AI programme and still fail at execution. The right capabilities help ensure success.
One of the most fundamental questions government leaders must consider is, “Who can build an AI system?” In a Deloitte survey, about 70 per cent of public-sector respondents identified a skills gap in meeting the needs of AI projects.36 While the private sector can often attract the right data science talent with competitive remuneration, the public sector faces the constraint of relatively limited salary ranges.37
Of course, government work has its own attractions. Many professionals, for example, say they want work that is meaningful and improves the world around them.38 Agency recruiters should emphasise the vital problems they’re tackling and the opportunities to serve society that AI opens up.39
But the constant talent challenge needn’t derail an AI strategy. When recruiting falls short, agencies can acquire expertise in other ways.
Agencies should consider whether they will develop AI talent capabilities internally, hire contractors, or use both. On the technical side, agencies must decide whether AI software can be developed in-house, purchased off-the-shelf, or commissioned as custom code. These issues require careful consideration from agency leaders and will vary depending on the agency, its level of ambition and the maturity of the technology in question.
Just as human capabilities are essential to a winning AI strategy, so are the appropriate architectures, infrastructures, data integration and interoperability capabilities. Agencies should test and modify infrastructure before rolling out solutions, of course, but also determine whether their existing data centre can manage the expected AI workload. Often the answer is “yes” for a simple proof of concept, but “no” for a production solution. This also raises the question of how data and cloud strategies will come into play.
Cloud and data strategies are essentially universal issues for AI deployment. Others, such as necessary organisational changes or the need to modernise individual systems, will be specific to each agency and the ambition, applications and value it pursues.
Creating the management systems
The DoD will identify and implement new organisational approaches, establish key AI building blocks and standards, develop and attract AI talent, and introduce new operational models that will enable DoD to take advantage of AI systematically at enterprise scale.
—US Department of Defense48
Some leaders breathe a sigh of relief once their roadmap includes the necessary AI technologies, processes, capabilities and funding—but that relief is often premature. Too little attention typically goes to the management systems needed to validate specific initiatives (including cost/benefit analyses), evangelise the project to stakeholders and employees, scale projects from pilot to implementation and track performance.
Good change management and governance protocols are important. AI is likely to be disruptive to organisations and governments, due both to its novelty and to its potential complexity. Reviewing all these protocols is beyond the scope of this paper, although we’ve already alluded to the importance of building a compelling story and value proposition for the workforce and other stakeholders. For governance, we touched upon the need to establish and track performance measures—and to revisit them regularly.
This section focuses on some of the emerging structures and systems needed to bring AI strategies to fruition.
One common best practice for a complex technology project is to establish a centre of excellence (CoE) or “hub”. Centres of excellence are communities of specialists built around a topic or technology that develop best practices, build use-case solutions, provide training and share resources and knowledge.
CoEs draw technology and business stakeholders together around a common purpose, in effect partnering to identify and prioritise use cases, develop solutions, create innovations and share knowledge. For example, DoD chief data officer Michael Conlin established the Joint Artificial Intelligence Center (JAIC) with the overarching goal of accelerating the delivery of AI-enabled capabilities, scaling AI departmentwide and synchronising DoD AI activities to expand Joint Force advantages.49 Similarly, the Department of Veterans Affairs’ first director of artificial intelligence, Dr. Gil Alterovitz, is leading an effort to connect the department with AI experts in academia and industry.50
JAIC, again, is more than a simple CoE; it’s a critical element of AI governance. Governance challenges must be addressed if change management is to be effective as AI is adopted across mission areas and back-office operations. And since data-sharing is a vital element in achieving AI’s full impact, explicit governance models must address when and how data will be shared and how they will be protected. Some of these challenges would be typical of any digital project, but AI brings its own unique governance demands.
Government agencies increasingly use algorithms to make decisions—to assess the risk of crime, allocate energy resources, choose the right jobs for the unemployed and determine whether a person is eligible for benefits.51 To instil confidence and trust in AI-based systems, well-defined governance structures must explain how algorithms work and tackle issues of bias and discrimination.52
Several nations are establishing coordinating agencies and working groups to govern AI. Among these is the United Kingdom’s Centre for Data Ethics and Innovation, which will advise the government on how data-driven technologies such as AI should be governed, help regulators support responsible innovation and build a trustworthy system of governance.53 Singapore has published an AI governance framework to help organisations align internal structures and policies in a way that encourages the development of AI in a fair, transparent and explainable manner.54
Another challenge is thinking through how to advance the many pieces of a project from design through execution. One common trap is “pilot purgatory”, in which projects fail to scale. The reasons for this are varied, but often fall into a few recurring categories.
One way to avoid such potholes is to borrow from the studies of behavioural scientists and build quick wins into the process. At the start of your AI journey, focus resources on quick payoffs rather than more lengthy transformative projects. They can be relatively easy to execute and generate high mission value. Further, before launching pilots, agencies should prioritise business issues and subject potential AI solutions to a cost-benefit analysis; a first-pass screen is sketched below. Then pilots can be launched to see if the hypothesised benefits can be achieved.
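A first-pass cost-benefit screen needn’t be elaborate. The sketch below ranks hypothetical pilot candidates by simple payback period; every figure is invented, and a real analysis would discount cash flows and weigh mission value alongside money.

```python
# First-pass screen: rank candidate pilots by simple payback period.
# All figures are hypothetical; real analyses should discount cash flows.
candidates = [
    # (name, upfront cost, estimated annual benefit)
    ("document-processing automation", 250_000, 400_000),
    ("eligibility chatbot",            150_000,  90_000),
    ("predictive maintenance",         600_000, 300_000),
]

# Shortest payback first: these are the "quick wins" worth piloting early.
for name, cost, benefit in sorted(candidates, key=lambda c: c[1] / c[2]):
    print(f"{name:32s} payback: {cost / benefit:.1f} years")
```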
For public agencies, processes that integrate AI within existing computational infrastructures remain a challenge.55 While solutions vary, a few areas often require significant new systems, structures, or leadership; data management is chief among them.
Data is often stored in a variety of formats, in multiple data centres and in duplicate copies. If federal information isn’t current, complete, consistent and accurate, AI might make erroneous or biased decisions. All agencies should ensure their data is of high quality and that their AI systems have been trained, tested and refined.59
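Basic data-quality checks of this kind can be automated before any training run. The sketch below screens a hypothetical case table for missing values, duplicate identifiers and stale records; the columns and cut-off date are illustrative only.

```python
# Minimal pre-training data screen: completeness, duplicates, freshness.
# The table, columns and cut-off date are hypothetical.
import pandas as pd

cases = pd.DataFrame({
    "case_id": [1, 2, 2, 3, 4],
    "income":  [32000, None, 48000, 48000, 27000],
    "updated": pd.to_datetime(
        ["2024-01-05", "2023-02-01", "2023-02-01", "2024-03-10", "2019-06-30"]),
})

issues = {
    "missing_income": int(cases["income"].isna().sum()),
    "duplicate_ids":  int(cases["case_id"].duplicated().sum()),
    "stale_records":  int((cases["updated"] < "2021-01-01").sum()),
}
print(issues)  # {'missing_income': 1, 'duplicate_ids': 1, 'stale_records': 1}
```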
JAIC is partnering with the National Security Agency, the US Cyber Command and the DoD’s cybersecurity vendors to streamline and standardise data collection as a foundation for AI-based cyber tools. This standardised data can be used to create algorithms that can detect cyberattacks, map networks and monitor user activity.60
As with the introduction of electricity at the beginning of the 20th century and the internet more recently, AI can fundamentally alter how we live and work. As key developers and users of AI-based systems, government agencies have a special responsibility to consider not only how it can be employed to make work more productive and innovative, but also to think about its potential effects, positive and negative, on society as a whole.
Agencies should begin the AI journey with a strategy that intertwines technology capabilities with mission objectives; creates a clear methodology for decision-making; sets guardrails and objectives for implementation; and emphasises transparency, accountability and collaboration. Government leaders are already taking action on AI investments. A clear, holistic strategy will help them ensure their AI programmes are more likely to achieve their potential for the mission, the workforce, and citizens today and tomorrow.
The Deloitte Center for Government Insights shares inspiring stories of government innovation, looking at what’s behind the adoption of new technologies and management practices. We produce cutting-edge research that guides public officials without burying them in jargon and minutiae, crystallising essential insights in an easy-to-absorb format. Through research, forums and immersive workshops, our goal is to provide public officials, policy professionals and members of the media with fresh insights that advance an understanding of what is possible in government transformation.