
The future of intelligence analysis

A task-level view of the impact of artificial intelligence on intel analysis

Kwasi Mitchell, PhD
Joe Mariani
Adam Routh, PhD
Akash Keyal
Alex Mirkow

How will artificial intelligence impact intel analysis and, specifically, the intelligence community workforce? Learn what organizations can do to integrate AI most effectively and play to the strengths of humans and machines.

The future is already here

In the last decade, artificial intelligence (AI) has progressed from near-science fiction to common reality across a range of business applications. In intelligence analysis, AI is already being deployed to label imagery and sort through vast troves of data, helping humans see the signal in the noise.1 But what the intelligence community (IC) is now doing with AI is only a glimpse of what is to come. These early applications point to a future in which smartly deployed AI will supercharge analysts’ ability to extract value from information.

The adoption of AI has been driven not only by increased computational power and new algorithms but also by the explosion of data now available. By 2020, the World Economic Forum expects there to be 40 times more bytes of digital data than there are stars in the observable universe.2 For intelligence analysts, that proliferation of data means surefire information overload. Human analysts simply cannot cope with that much data. They need help.

Intelligence leaders know that AI can help cope with this data deluge, but they may also wonder what impact AI will have on their work and workforce. According to surveys of private sector companies, there is a significant gap between the introduction of AI and understanding its impact. Nearly 20 percent of workers report experiencing a change in roles, tasks, or ways of working as a result of implementing AI, yet nearly 50 percent of companies have not measured how workers are being affected by AI implementation.3 This article begins to tackle those questions, offering a task-level look at how AI may change work for intel analysts. It also offers ideas for organizations seeking to speed adoption and move from pilots to full-scale deployment. AI is already here; let’s see how it will shape the future of intelligence analysis.

What do we mean by AI?

The term “artificial intelligence” can mean a huge variety of things depending on the context. To help leaders navigate such a wide landscape, it is useful to distinguish between the model classes of AI and the applications of AI: the first classifies AI by how it works; the second, by what tasks it is set to do.


AI in the intel cycle

Intelligence flows through a five-step “cycle” carried out by specialists, analysts, and management across the IC: planning and direction; collection; processing; analysis and production; and dissemination (figure 2). The value of outputs throughout the cycle, including the finished intelligence that analysts put into the hands of decision-makers, is shaped to an important degree by the technology and processes used, including those that leverage AI.

Technologies such as unmanned aerial systems, remote sensors, advanced reconnaissance airplanes, the internet, computers, and other systems have supercharged the collection process to such an extent that analysts often have more data than they can process.4 Complicating matters, the data collected often resides in different systems and comes in different mediums, requiring analysts to spend time piecing together related information—or fusing data—before deeper analysis can begin.

Access to more data should be a good thing. But without the ability to fuse and process it, data can inundate analysts with mountains of incoherent pieces of information. The director of the National Geospatial-Intelligence Agency said that if trends hold, intelligence organizations could soon need more than 8 million imagery analysts alone, which is more than five times the total number of people with top secret clearances in all of government.5 In the modern digitized age, where success in warfare depends on a nation’s ability to analyze information faster and more accurately than adversaries, data cannot go unanalyzed.6 But given the pace at which humans operate, there simply isn’t enough time to make sense of all the data and perform the other necessary intelligence cycle tasks.

AI can provide much-needed support. Intelligence agencies are already using AI’s power to sort through volumes of data to pull out critical “knowns” for further analysis. For example, agencies have used AI to automatically detect and label patterns of vehicles that indicate SA-21 surface-to-air missile batteries, or to sift through millions of financial transactions to identify patterns consistent with illicit weapons smuggling. Similarly, the Joint Artificial Intelligence Center (the Department of Defense’s focal point for AI) is already working to develop products across “operations intelligence fusion, joint all-domain command and control, accelerated sensor-to-shooter timelines, autonomous and swarming systems, target development, and operations center workflows.”
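To make the transaction-sifting example concrete, the sketch below shows one way such a triage step might look: an off-the-shelf anomaly detector flags the most unusual transactions for an analyst to review. The feature set, data, and contamination rate are hypothetical stand-ins, not a description of any agency’s actual tooling.

```python
# A minimal, hypothetical sketch of AI-assisted triage over financial transactions.
# Real systems would use domain-specific features and models trained on known cases.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in features: [amount_usd, transfers_per_week, distinct_counterparties]
transactions = rng.normal(loc=[5_000, 3, 4], scale=[2_000, 1, 2], size=(10_000, 3))

# Flag roughly the most unusual 1 percent of transactions for human review.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(transactions)  # -1 = flagged as anomalous, 1 = typical

flagged = np.flatnonzero(labels == -1)
print(f"{flagged.size} of {transactions.shape[0]} transactions queued for analyst review")
```

The point of such a step is not to make the judgment but to shrink the haystack: the analyst still decides whether a flagged pattern means anything.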

Our analysis suggests AI operating in these capacities can save analysts’ time and enhance output. While exact time savings will depend on the type of work performed, an all-source analyst who has the support of AI-enabled systems could save as much as 364 hours, or more than 45 working days, a year (figure 3). These savings can free up analysts to devote more time to higher-priority tasks or to build skills through additional training, among other activities. (For more information on our methodology, see the appendix.)

The real value of AI

The benefits of AI, however, can go far beyond time savings. After all, intelligence work never ends; there is always another problem that demands attention. So saving time with AI will not reduce the workforce or trim intelligence budgets. Rather, the greater value of AI comes from what might be termed an “automation dividend”: the better ways analysts can use their time after these technologies lighten their workload.

Indeed, research on industries from banking to logistics shows that the greatest benefit of automation comes when human workers use technology to “move up the value chain.”7 Put another way, they spend more time performing tasks that have greater benefit to the organization and/or customer. For example, when automation freed supply chain workers from tasks such as measuring stock or filling in order forms, they could redeploy that time to create new value by matching specific customer needs to supplier capabilities.8 For intelligence analysis, leveraging AI to instantly pull otherwise hard-to-spot indications and warning (I&W) leads out of messy data could allow human analysts to do the higher-value work of determining whether a given I&W lead represents a valid threat.

There are two main ways to create additional value with extra time: Analysts can spend more time on higher-value tasks that they already do, or they can add new high-value tasks.

Do more: Humans focus on human tasks

Before these benefits can be realized, however, intelligence organizations must determine which tasks are of the highest value and, therefore, best suited for human workers to perform. To start, let’s compare humans with computers and other machines.

The key is in understanding the difference between specialized intelligence and general intelligence. Even a simple pocket calculator can outperform the best math whiz at some tasks. But while it is fast and accurate, arithmetic is the only task the pocket calculator can perform. It has a very narrow, specialized intelligence. Humans, on the other hand, tend to outperform even the most advanced computers in general intelligence. As MIT professor Thomas Malone explains, “Even a five-year-old child has more general intelligence than the most advanced computer programs today. A child can carry on a much more sensible conversation about a much wider range of topics than any computer program today, and operate more effectively in an unpredictable physical environment.”9

So while machines are better than humans at handling large volumes of data or working to extreme levels of precision, humans are better at tasks that change dramatically with context or those that involve high levels of interpersonal interaction. Teamed together, human workers and AI tools can each play to their strengths; AI tackles the huge volumes of data and humans deal with the highly variable tasks. Inside intelligence organizations, human analysts can move up the value chain by offloading many of their data-heavy processing- and exploitation-related tasks onto machines. They can then put more of their own energy into the analysis, planning, and direction tasks that often require more creativity, communication, and collaboration with colleagues and decision-makers.

Our model (see “Appendix: Methodology”) makes similar predictions for intel analysts. With AI taking on tasks such as data cleaning, labeling, or pattern recognition, all-source analysts can spend more time on context-sensitive or uniquely human tasks. As a result, future analysts will likely spend more time collaborating with others—up to 58 percent more than they do today.

How could greater collaboration play out across the intel cycle? As an example, in the dissemination stage, analysts present information to decision-makers, collaborating with them so they can make the best decisions. If AI could take on much of the prep work in assembling sources, creating graphics, or even drafting reports, human analysts could focus on the needs of the decision-maker and the implications of the situation. In this scenario, an analyst would simply provide AI with the topic of an upcoming briefing or finished product. From there, AI could automatically generate a list of relevant reports to read through, preselect maps or imagery, label the relevant features for a briefing, and even write short summaries of background events.
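As a rough illustration of that prep-work step, the sketch below ranks a set of previously published reports by textual similarity to a briefing topic so the analyst starts from the most relevant material. The report corpus, topic string, and scoring approach (TF-IDF cosine similarity) are illustrative assumptions, not a description of any fielded IC system.

```python
# A minimal sketch: rank candidate reports by similarity to a briefing topic.
# Report texts and the topic string are hypothetical stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = {
    "RPT-001": "Surface-to-air missile battery activity observed near the coastal range.",
    "RPT-002": "Quarterly summary of regional shipping traffic and port expansion projects.",
    "RPT-003": "Assessment of mobile SA-21 deployments and associated radar emissions.",
}
topic = "upcoming briefing on SA-21 surface-to-air missile deployments"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(reports.values()) + [topic])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Present reports in descending order of relevance for the analyst to review first.
for report_id, score in sorted(zip(reports, scores), key=lambda pair: -pair[1]):
    print(f"{report_id}: relevance {score:.2f}")
```

A production tool would add classification handling, entity extraction, and access controls, but the basic move is the same: the machine assembles the starting material, the analyst supplies the judgment.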

A similar shift is already taking place in journalism. AI is being used to automatically generate simple news stories.10 In its first year, the Washington Post’s bot published 850 articles on everything from the Olympics to elections. By automating detail-oriented tasks such as writing corporate earnings reports, the AP found that use of bots reduced journalists’ workload by 20 percent, allowing them to focus on reducing errors and spotting larger trends.11 As a result, even as output increased, there were fewer errors in corporate earnings stories. Intel analysts could benefit from a similar arrangement: AI could generate routine intelligence summaries or daily reports, allowing analysts to focus on synthesizing those reports into larger trends or customizing reports to the preferences of specific decision-makers.

Do something new: Explore the possibilities

As we have all experienced, new technologies can come with new tasks, and AI will most likely introduce entirely new tasks for workers to handle. Using the adoption of other advanced technologies as a guide, we expect many of these new tasks to fall into one of three categories:

  • Delivering new models. Intelligence is fundamentally about using information to reduce uncertainty for national leaders. The rapid pace of modern decision-making is among the biggest challenges leaders face.12 AI can add value by helping provide new ways to more quickly and effectively deliver information to decision-makers. One such idea is a shift toward real-time decision support. In the past, complex models of adversary behavior would take months to create and update, leading to a long cycle of formal intelligence products. Today, using AI and big data, analysis can take place much faster, often just short of real time. This is now happening in auto racing, where Formula One race teams adjust strategy models based on thousands of data points as cars are racing around the track.13 A sudden shift in weather or an unexpected pit stop from a rival can trigger changes to a team’s plan in seconds. Following the Formula One racing example, intelligence analysts with AI-infused models that could simulate even complex scenarios quickly would be able to answer questions from decision-makers as they ask them rather than waiting for finished intel products. Our model suggests that analysts could spend up to 39 percent more time advising decision-makers in this manner following the adoption of AI at scale (figure 3).
  • Developing people. A motivated and informed workforce is a more productive workforce. Tasks that improve employees’ well-being or performance are likely to create significant new value for any organization. In intelligence, to perform at their best, analysts need opportunities to learn and grow. They need to keep abreast of new technologies, new services, and new happenings across the globe—not just in annual training sessions, but continuously. AI could help bring continuous learning to the widest scale possible by recommending courseware based on what analysts are reading or writing about in their daily work. For an analyst researching Chinese fifth-generation fighter development, AI could recommend he or she complete a short training on quantum radar or read a history of Chinese aviation. AI could also recommend when analysts need to take breaks or change tasks to keep fresh.
  • Maintaining the tech itself. From the steam engine to computers, new technologies need maintenance, and AI is likely to be no different. One significant challenge to using AI effectively in high-stakes situations such as intelligence work is having confidence in the outputs of AI models. Beyond just following up on AI-generated leads, organizations will likely also need to maintain AI tools and validate their outputs so that analysts can have confidence when using them (a simple validation check is sketched after this list). In medicine, where AI is beginning to be applied to diagnostic tools such as MRI imagery, validating the output of AI models against known benchmarks is becoming a common new task for hospital staff.14 Much of this validation can be performed as AI tools are designed or training data is selected. But while cancer isn’t trying to deny or deceive doctors, foreign actors may attempt to use adversarial examples to fool AI used in intelligence. This means that validation will need to be a continuous task not only for analysts but for IT staff as well.15
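One minimal way to operationalize that continuous validation task is to regularly re-score a deployed model against a curated, labeled benchmark and alert staff when performance drifts below an agreed floor. The sketch below assumes a generic prediction function, a hand-built benchmark, and a 95 percent threshold; all three are hypothetical choices rather than an IC standard.

```python
# A minimal sketch of continuously validating an AI tool against a labeled benchmark.
# The model interface, benchmark records, and 0.95 threshold are hypothetical.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class BenchmarkRecord:
    features: Sequence[float]
    expected_label: str

def validate(predict: Callable[[Sequence[float]], str],
             benchmark: Sequence[BenchmarkRecord],
             threshold: float = 0.95) -> bool:
    """Return True if the model still meets the benchmark; raise an alert otherwise."""
    correct = sum(predict(record.features) == record.expected_label for record in benchmark)
    accuracy = correct / len(benchmark)
    if accuracy < threshold:
        print(f"ALERT: accuracy {accuracy:.0%} is below {threshold:.0%}; "
              "route outputs to manual review and notify IT staff.")
        return False
    print(f"Model passed benchmark validation at {accuracy:.0%}.")
    return True

# Example run with a trivial stand-in model that always answers "vehicle".
benchmark = [BenchmarkRecord([0.1, 0.9], "vehicle"), BenchmarkRecord([0.8, 0.2], "decoy")]
validate(lambda features: "vehicle", benchmark)
```

Periodically refreshing the benchmark itself, including with known adversarial examples, is what keeps a check like this meaningful as conditions and adversaries change.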

Avoiding pitfalls

The fact that AI could require new tasks just to make sure it is operating correctly does highlight a potential danger: AI could eat up more time than it gives to analysts. And given that AI brings so much change, organizations adopting AI at scale will experience some level of friction. You simply cannot change the tasks of 20 percent of your workforce or add weeks’ worth of new tasks without straining staff, business processes, and existing tools. Intelligence organizations that want to get the best from AI need to recognize the pitfalls and find ways to mitigate them.

New tech as a time sink

Perhaps the most significant pitfall is the possibility that, rather than creating new value, AI ends up monopolizing analysts’ time. Such situations have emerged before, such as with the health care industry’s implementation of electronic health records (EHR). While EHR promised to reduce health care professionals’ workloads, recent research has shown that EHR has, in fact, increased the amount of time it takes doctors to document patient visits.16 Doctors using EHR spend more time typing during patient visits, which reduces the amount of face-to-face time they have with patients. Overall, this drop in interaction has fed negative perceptions among both patients and doctors.17

Interestingly though, the EHR example can help intelligence organizations avoid this pitfall. While doctors spend more time documenting in EHR than with paper notes, nurses and clerical staff actually experience significant time savings in their tasks. So EHR causing doctors to spend more time is not necessarily a failure of the technology; rather, it reflects the strategic priorities of the organization, essentially shifting some billing and clerical workload from staff to physicians.18 If we are unhappy with the outcome, it is not the technology’s fault. Rather, it reflects a need to reevaluate the business and technical strategies that led to it.

If intelligence organizations are to avoid similar issues with the adoption of AI at scale, they must be clear about their priorities and how AI fits within their overall strategy. An organization focused on increasing productivity will pursue very different AI tools from one looking to improve the accuracy of analytical judgments. AI is not the solution to every problem, and having a clear vision about its value can help ensure it is applied to the right problems. Having clarity about the goals of an AI tool can also help leaders communicate their vision for AI to the workforce and alleviate feelings of mistrust or uncertainty about how the tools will be used.

Second, intelligence organizations should avoid investing in “empty technology”—using AI without having access to the data it needs to be successful. AI is something like a flour mill: Without the grain to feed it, it is not going to produce much value. Even the most advanced AI tool will have limited utility if it lacks effective training data or sufficient input data. Without the right data, AI tools can still eat up time as analysts attempt to use them, but their outputs will be of little value. The result will be frustrated analysts who view AI as a waste of their limited time.

Analyst mistrust

Analysts’ perceptions are critically important to the successful at-scale adoption of AI. Survey results suggest that analysts are more skeptical of AI than technical staff, managers, or executives.19 And as seen above, if the workforce does not see the value in a tool, it is unlikely to use it.

To overcome this skepticism and get the most from AI, management will need to focus on educating the workforce and reconfiguring business processes to seamlessly integrate the tools into workflows. Without these steps, AI can just be a costly afterthought. For example, one federal agency implemented an AI pilot to generate leads for its investigators to follow up. However, the investigators were also simultaneously generating their own leads. With limited time for follow-up, the investigators naturally prioritized the leads they had come up with themselves and rarely used the leads generated by AI.20

Overcoming analysts’ initial doubts about a given AI tool comes down to creating trust between the analysts and the tool. Because they must stand behind their assessments even when powerful people may disagree, analysts harbor an understandable reluctance to put faith in something they cannot explain and defend. Having an interface that allowed the analyst to easily scan the data underpinning a simulated outcome, for example, or to view a representation of how the model came to its conclusion, would go a long way toward that analyst incorporating the technology as part and parcel of his or her workflow. This would allow for much more reliable, trusted data, and would yield more reliable analysis being presented to war fighters and decision-makers.

While having a workforce that lacks confidence in AI’s outputs can be a problem, the opposite may also turn out to be a critical challenge. For many decades, intelligence leaders have been aware of the phenomenon where adding data to an analyst’s judgments increases the analyst’s confidence that they are right without actually improving the work’s overall accuracy.21 In other words, more data played into analysts’ confirmation bias—they used the new evidence to support their preconceived conclusions instead of helping create more accurate analysis.

The psychology experiments that lie at the heart of that observation were done with only two to five times more data. AI would make orders of magnitude more data available to analysts, possibly exacerbating analysts’ confirmation bias. For example, in the financial services industry, early experience shows that AI can provide analysts with roughly 30 times the amount of data available today.22 It is simply unknown how human cognition will respond to such an unprecedented volume of data. Analysts could become less confident in AI judgments due to information overload. Or, conversely, with so much data at their disposal, analysts could become overconfident, implicitly trusting the AI. The latter could be especially dangerous: Many aviation accidents have shown that a mismatch between humans’ trust in automation and their understanding and supervision of it can lead to tragedy.23

Conversely, there are promising ways in which AI could actually help analysts combat confirmation bias and other human cognitive limitations. For instance, AI could be given tasks that help check the validity of assessments that humans struggle to find time for or are burdensome to do manually. Machines would be very good at continuously conducting key assumptions checks, analyses of competing hypotheses, and quality of information checks.24 Senior analytic managers could also leverage AI to alert them to mismatches between evidence coming in and their teams’ assessments, giving them an opportunity to direct analytic line reviews and focus their attention on problem areas.
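Because the analysis of competing hypotheses reduces to scoring each hypothesis against each piece of evidence, it is a natural candidate for this kind of machine-run check. The sketch below shows the core bookkeeping; the hypotheses, evidence items, and consistency ratings are purely illustrative.

```python
# A minimal sketch of a machine-run analysis of competing hypotheses (ACH).
# Classic ACH counts inconsistencies: the hypothesis with the fewest
# inconsistent evidence items is the hardest to refute. Ratings are illustrative.
INCONSISTENCY_WEIGHT = {"consistent": 0, "neutral": 0, "inconsistent": 1}

hypotheses = ["H1: routine exercise", "H2: force buildup", "H3: deception operation"]

# Each evidence item is rated against every hypothesis (same order as `hypotheses`).
evidence = {
    "Increased rail traffic toward the border":  ["inconsistent", "consistent", "consistent"],
    "No change in fuel or munitions stockpiles": ["consistent", "inconsistent", "neutral"],
    "Unusual communications silence":            ["inconsistent", "consistent", "consistent"],
}

scores = [
    sum(INCONSISTENCY_WEIGHT[ratings[i]] for ratings in evidence.values())
    for i in range(len(hypotheses))
]

# A tool could re-run this whenever new reporting arrives and alert a manager
# when incoming evidence starts to contradict the current analytic line.
for hypothesis, score in sorted(zip(hypotheses, scores), key=lambda pair: pair[1]):
    print(f"{hypothesis}: {score} inconsistent item(s)")
```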

In the end, the impact AI may have on the cognitive biases of analysts is simply not known. Leaders need to pay careful attention to analysts’ concerns, evaluate business process design, and continuously monitor AI performance to help prevent any potential pitfalls.

AI tradecraft: How to get started today

The greatest benefits of AI will be achieved when, like electrification, it is embedded into every aspect of an organization’s operations and strategy.25 For all the game-changing benefits that AI can bring at scale, and for all its organization-shaking pitfalls, the immediate steps to getting started can be surprisingly familiar.

Across a government agency or organization, successful adoption at scale would require leaders to harmonize strategy, organizational culture, and business processes. If any of those efforts are misaligned, AI tools could be rejected or could fail to create the desired value. Leaders need to be upfront about their goals for AI projects, ensure those goals support overall strategy, and pass that guidance on to technology designers and managers to ensure it is worked into the tools and business processes.

Establishing a clear AI strategy can also help organizations apply AI to tackle a variety of problems, from mission-facing to back-office. Such a strategy can frame decisions about what infrastructure and partners are necessary to access the right AI tools for an organization. With 83 percent of enterprise AI in the cloud, organizations can find it easier to develop AI tools in-house, purchase from external vendors, or even find an existing solution already in use elsewhere in the cloud.26

At a division or team level, the first steps shift from strategic alignment to analyst adoption. Tackling some of the significant nonanalytical challenges analyst teams face could be a palatable way to introduce AI to analysts and build their confidence in it. Today, analysts are inundated with a variety of tasks, each of which demands different skills, background knowledge, and the ability to communicate with decision-makers. For any manager, assigning these tasks across a team of analysts without overloading any one individual or delaying key products can be daunting. AI could help pair the right analyst to the right task so that analysts can work to their strengths more often, allowing work to get done better and more quickly than before.
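To illustrate the task-pairing idea in the preceding paragraph, the sketch below uses a standard assignment algorithm to match analysts to tasks so that total skill fit is maximized. The analysts, tasks, and fit scores are hypothetical; a real tool would derive such scores from skills data and past performance.

```python
# A minimal sketch of pairing analysts to tasks by maximizing total skill fit,
# using the Hungarian assignment algorithm. Names and fit scores are hypothetical.
import numpy as np
from scipy.optimize import linear_sum_assignment

analysts = ["Analyst A", "Analyst B", "Analyst C"]
tasks = ["Imagery review", "Daily summary", "Network analysis"]

# fit[i, j]: how well analyst i suits task j (higher is better).
fit = np.array([
    [0.9, 0.4, 0.3],
    [0.2, 0.8, 0.6],
    [0.5, 0.3, 0.9],
])

# linear_sum_assignment minimizes total cost, so negate the fit scores.
rows, cols = linear_sum_assignment(-fit)
for i, j in zip(rows, cols):
    print(f"{analysts[i]} -> {tasks[j]} (fit {fit[i, j]:.1f})")
```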

Similarly, AI could help managers evaluate performance and screen job applicants for aptitude in a particular skill or even identify all-around stars, much like Special Operations Command is exploring with Marine Raider applicants.27 The benefit to these nonanalytical uses of AI is that when analysts see AI aid them in their work, rather than competing with them, they would likely become more comfortable working with AI as it moves into more analytical tasks.

AI is not coming to intelligence work; it is already here. But the long-term success of AI in the IC depends as much on how the workforce is prepared to receive and use it as any of the 1s and 0s that make it work.

Appendix: Methodology

Our analysis began with Department of Labor O*NET data for the intelligence analysis occupation. However, since the O*NET data for intelligence analysis is based on very few survey responses, we supplemented it with data from similar occupations, such as investigative police officer, to create a list of detailed work activities, or “tasks,” that could accurately represent the breadth of intel analysis.

In developing our model, we then broke the tasks into two archetypes to reflect some of the diversity in the type of work intelligence analysts can perform. For each archetype we assigned tasks to different stages of the intel cycle and included rough levels of effort for each task. Next, we calculated the automation potential for each task using the same algorithm from our previous research into the impact of AI on government.28 The calculation considers various factors, including how much social intelligence, creative intelligence, and perception or manipulation each task requires to estimate how automatable the task is.
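The sketch below illustrates the shape of that calculation: each task gets an automation-potential score based on how much social intelligence, creative intelligence, and perception or manipulation it demands, and that score is applied to the task’s annual hours. The weights, task list, and hours shown are hypothetical placeholders, not the study’s actual parameters.

```python
# A hypothetical sketch of the task-level automation-potential calculation.
# Weights, tasks, and hours are placeholders, not the study's actual parameters.
def automation_potential(social: float, creative: float, perception: float) -> float:
    """Score from 0 to 1; tasks demanding more human-centric skill are less automatable."""
    human_skill_demand = 0.4 * social + 0.4 * creative + 0.2 * perception  # assumed weights
    return max(0.0, 1.0 - human_skill_demand)

# task: (annual hours, social intelligence, creative intelligence, perception/manipulation)
tasks = {
    "Label and fuse incoming data": (300, 0.1, 0.1, 0.2),
    "Draft routine summaries":      (200, 0.2, 0.3, 0.0),
    "Brief decision-makers":        (150, 0.9, 0.7, 0.1),
}

total_saved = 0.0
for name, (hours, social, creative, perception) in tasks.items():
    saved = hours * automation_potential(social, creative, perception)
    total_saved += saved
    print(f"{name}: ~{saved:.0f} of {hours} hours potentially automatable")

print(f"Total: ~{total_saved:.0f} hours, or ~{total_saved / 8:.0f} eight-hour working days a year")
```

Summing such task-level savings across the full task list for each archetype is what yields the article’s figure of roughly 364 hours, or more than 45 working days, for an all-source analyst.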

Tasks that are more amenable to automation will feature time savings, while tasks less suitable to automation may actually see time gains as analysts are able to spend more time on these activities (figure 4). For even more background on our approach, see the analysis from our original report.

How much time AI may save on a particular task is a function of how much interpersonal interaction, creativity, or manual dexterity the task requires. Two tasks may both involve collaboration, but if the focus of one is on sharing information, a highly automatable activity, while the focus of the other is on working with others, a task requiring significant interpersonal interaction, the first will see far greater time savings than the second.


Defense, Security, and Justice services

Deloitte offers national security consulting and advisory services to clients across the Department of Homeland Security, the Department of Justice, and the intelligence community. From cyber and logistics to data visualization and mission analytics, personnel, and finance, we bring insights from our client experience and research to drive bold and lasting results in the national security and intelligence sector. People, ideas, technology, and outcomes—all designed for impact.


The authors would like to express sincere thanks to Lt. Gen. (retired) Vincent Stewart along with Tim Chase, Bev McDonald, Matt Jennings, Alex Addy, May Meyers, and Peter Viechnicki of Deloitte Consulting LLP and Derek Pankratz of Deloitte Services LP for their expertise and advice. We owe a special debt to Pankaj Kishnani from Deloitte Support Services India Pvt. Ltd. and Rebecca Kaim, Matt Murphy, Jason Atwell, and Oleg Oleinic of Deloitte Consulting LLP for their great work in the creation of the underlying models used in this research.

Cover image by: Sonya Vasilieff

  1. Russ Travers, “The coming intelligence failure: A blueprint for survival,” Central Intelligence Agency Library, April 14, 2007.
  2. Jeff Desjardins, “How much data is generated each day?,” World Economic Forum, April 17, 2019.
  3. Richard Horton et al., Automation with intelligence: Reimagining the organization in the ‘Age of With’, Deloitte Insights, September 6, 2019.
  4. Cortney Weinbaum and John N.T. Shanahan, Intelligence in a data-driven age, National Defense University Press, 2018.
  5. Office of the Director of National Intelligence, The AIM Initiative: A strategy for augmenting intelligence using machines, January 16, 2019; Brian Fung, “5.1 million Americans have security clearances. That is more than the entire population of Norway,” Washington Post, March 25, 2014.
  6. Success in future warfare will be determined by whoever can collect, analyze, disseminate, and act on information the fastest. For more, see: Shawn Brimley et al., “Building the future force: Guaranteeing American leadership in a contested environment,” March 29, 2018.
  7. Adam Mussomeli et al., The digital supply network meets the future of work, Deloitte Insights, December 18, 2017.
  8. Ronald Burt, Brokerage and Closure: An Introduction to Social Capital (New York: Oxford University Press, 2005).
  9. Jim Guszcza and Jeff Schwartz, “Superminds: How humans and machines work together,” Deloitte Review 24, January 28, 2019.
  10. Jaclyn Peiser, “The Rise of the Robot Reporter,” New York Times, February 5, 2019.
  11. Lucia Moses, “The Washington Post’s robot reporter has published 850 articles in the past year,” Digiday, September 14, 2017.
  12. Lt. Gen. (retired) Vincent Stewart, former head of the Defense Intelligence Agency, correspondence with the authors, September 27, 2019.
  13. Joe Mariani, “Racing the future of production: A conversation with Simon Roberts, operations director of McLaren’s Formula One team,” Deloitte Review 22, January 22, 2018.
  14. Thomas M. Maddox, John S. Rumsfeld, and Philip R.O. Payne, “Questions for artificial intelligence in health care,” JAMA 321, no. 1 (2019): pp. 31–2.
  15. Ian Goodfellow et al., “Attacking machine learning with adversarial examples,” OpenAI, February 24, 2017.
  16. Lise Poissant et al., “The impact of electronic health records on time efficiency of physicians and nurses: A systematic review,” Journal of the American Medical Informatics Association 12, no. 5 (2005): pp. 505–16, doi:10.1197/jamia.M1700.
  17. Onur Asan, Paul D. Smith, and Enid Montague, “More screen time, less face time—implications for EHR design,” Journal of Evaluation in Clinical Practice 20, no. 6 (December 2014): pp. 896–901.
  18. Pascale Carayon et al., “Implementation of an electronic health records system in a small clinic: The viewpoint of clinic staff,” Behaviour & Information Technology 28, no. 1 (2009): pp. 5–20.
  19. Horton et al., Automation with intelligence.
  20. Deloitte project work for a federal security agency.
  21. Richards J. Heuer Jr., The Psychology of Intelligence Analysis (Militarybookshop.Co.UK, 2010): pp. 54–5.
  22. The number (30 times) is derived from estimates of 10x overall data growth between 2016 and 2025 (David Reinsel, John Gantz, and John Rydning, Data Age 2025: The digitization of the world, International Data Corporation, November 2018) and an upper bound of how AI could make existing, but unused, data within a firm available to decision-makers (Mark Gualtieri, “Hadoop is data’s darling for a reason,” Forrester, January 21, 2016).
  23. Crashes such as Air France flight 447 or Aeroflot flight 593 highlight the real dangers of an overreliance on, or lack of understanding of, automation among workers using those tools.
  24. Richards J. Heuer Jr. and Randolph H. Pherson, Structured Analytic Techniques for Intelligence Analysis (Washington, DC: CQ Press, 2015).
  25. Erik Brynjolfsson, Daniel Rock, and Chad Syverson, “Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics,” NBER Working Paper No. 24001, October 16, 2019.
  26. Jeff Loucks, Artificial intelligence: From expert-only to everywhere: TMT predictions 2019, Deloitte Insights, December 11, 2018.
  27. Nicholas Martin, “USSOCOM develops AI tech to evaluate Marine Raider candidates,” ExecutiveGov, September 27, 2019.
  28. Peter Viechnicki and William Eggers, How much time and money can AI save government?, Deloitte Insights, April 26, 2017.
