Smarter together: Why artificial intelligence needs human-centered design

Deloitte Review, issue 22

Jim Guszcza

United States

Endnotes

    1. See Terry Winograd, “Thinking machines: Can there be? Are we?,” originally published in James Sheehan and Morton Sosna, eds., The Boundaries of Humanity: Humans, Animals, Machines (Berkeley: University of California Press, 1991).

    2. See, for example, Matthew Hutson, “Even artificial intelligence can acquire biases against race and gender,” Science, April 13, 2017; “Fake news: You ain’t seen nothing yet,” Economist, July 1, 2017; Paul Lewis, “‘Our minds can be hijacked’: The tech insiders who fear a smartphone dystopia,” Guardian, October 6, 2017; Holly B. Shakya and Nicholas A. Christakis, “A new, more rigorous study confirms: The more you use Facebook, the worse you feel,” Harvard Business Review, April 10, 2017.

    3. The prominent AI researcher Yann LeCun discusses AI hype and reality in an interview with James Vincent, “Facebook’s head of AI wants us to stop using the Terminator to talk about AI,” The Verge, October 26, 2017.

    4. For further discussion of this theme, see James Guszcza, Harvey Lewis, and Peter Evans-Greenwood, “Cognitive collaboration: Why humans and computers think better together,” Deloitte University Press, January 23, 2017.

    5. See Don Norman, The Design of Everyday Things (New York: Basic Books, 2013).

    6. See Kris Hammond, “What is artificial intelligence?,” Computerworld, April 10, 2015.

    7. See Lukas Biewald, “Why human-in-the-loop computing is the future of machine learning,” Computerworld, November 13, 2015.

    8. Bing search performed on October 11, 2017. This is the result of work led by cognitive scientists Dan Goldstein and Jake Hoffman on helping people better grasp large numbers (see Elizabeth Landau, “How to understand extreme numbers,” Nautilus, February 17, 2017).

    9. I owe this point to Harvey Lewis. For a discussion, see Tim Harford, “Crash: How computers are setting us up for disaster,” Guardian, October 11, 2016.

    10. See Antonio Regalado, “The biggest technology failures of 2016,” MIT Technology Review, December 27, 2016.

    11. Many examples of algorithmic bias are discussed in April Glaser, “Who trained your AI?,” Slate, October 24, 2017.

    12. See the references in the first endnote. In their recent book The Distracted Mind: Ancient Brains in a High-Tech World (MIT Press, 2016), Adam Gazzaley and Larry Rosen explore the evolutionary and neuroscientific reasons why we are “wired for” distraction by digital technology, as well as the cognitive and behavioral interventions that can promote healthier relationships with it.

    13. Cass R. Sunstein, #Republic: Divided Democracy in the Age of Social Media (Princeton: Princeton University Press, 2017).

    14. See John Seely Brown, “Cultivating the entrepreneurial learner in the 21st century,” March 22, 2015. Available at www.johnseelybrown.com.

    15. See Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011).

    16. See Richard Thaler and Cass Sunstein, “Who’s on first: A review of Michael Lewis’s ‘Moneyball: The Art of Winning an Unfair Game,’” University of Chicago Law School website, September 1, 2003.

    17. See Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey, “Algorithm aversion: People erroneously avoid algorithms after seeing them err,” Journal of Experimental Psychology, 2014. In recent work, Dietvorst and Massey have found that people are more likely to embrace algorithmic decision-making if they are given even a small amount of latitude to occasionally override the algorithm’s output.

    18. Garry Kasparov, “The chess master and the computer,” New York Review of Books, February 11, 2010.

    19. This analogy is not accidental. Steve Jobs described computers as “bicycles for the mind” in the 1990 documentary Memory & Imagination: New Pathways to the Library of Congress. Clip available at “Steve Jobs, ‘Computers are like a bicycle for our minds.’ - Michael Lawrence Films,” YouTube video, 1:39, posted by Michael Lawrence, June 1, 2006.

    20. This is discussed by Philip Tetlock in Superforecasting: The Art and Science of Prediction (New York: Broadway Books, 2015).

    21. See Gartner, “Gartner says business intelligence and analytics leaders must focus on mindsets and culture to kick start advanced analytics,” press release, September 15, 2015.

    22. See Viktor Mayer-Schönberger and Kenneth Cukier, “Big data in the big apple,” Slate, March 6, 2013.

    23. Choice architecture can be viewed as the human-centered design of choice environments. The goal is to make it easy for us to make better decisions through automatic “System 1” thinking, rather than through willpower or effortful “System 2” thinking. For example, a person who is automatically enrolled in a retirement savings plan but able to opt out is considerably more likely to save for retirement than one for whom the default is set to not-enrolled. From the perspective of classical economics, such minor details in the way information is provided or choices are arranged—Richard Thaler, the Nobel laureate and co-author of Nudge: Improving Decisions About Health, Wealth, and Happiness (Yale University Press, 2008), calls them “supposedly irrelevant factors”—should have little or no effect on people’s decisions. Yet the core insight of choice architecture is that the long list of systematic human cognitive and behavioral quirks can serve as design elements for choice environments that make it easy and natural for people to make the choices they would make if they had unlimited willpower and cognitive resources. In his book Misbehaving (W. W. Norton & Company, 2015), Thaler acknowledged the influence of Don Norman’s approach to human-centered design on the writing of Nudge.

    24. When the population-level base rate of an event is low and the test or algorithm used to flag the event is imperfect, false positives can outnumber true positives. For example, suppose 2 percent of the population has a rare disease and the test used to identify it is 95 percent accurate. If a specific patient from this population tests positive, that person has only a roughly 29 percent chance of having the disease. This results from a simple application of Bayes’ Theorem (a short worked calculation appears after these endnotes). A well-known cognitive bias is “base rate neglect”—many people would assume that the chance of having the disease is not 29 percent but 95 percent.

    25. For more details, see Joy Forehand and Michael Greene, “Nudging New Mexico: Kindling compliance among unemployment claimants,” Deloitte Review 18, January 2016.

    26. This theme is explored in Jim Guszcza, David Schweidel, and Shantanu Dutta, “The personalized and the personal: Socially responsible uses of big data,” Deloitte Review 14, January 2014.

    27. See Mitesh Patel, David Asch, and Kevin Volpp, “Wearable devices as facilitators, not drivers, of health behavior change,” Journal of the American Medical Association 313, no. 5 (2015).

    28. For further discussion of the themes in this section, see James Guszcza, “The last-mile problem: How data science and behavioral science can work together,” Deloitte Review 16, January 2015.

    29. McCarthy defined artificial intelligence as “the science and engineering of making intelligent machines, especially intelligent computer programs,” and defined intelligence as “the computational part of the ability to achieve goals in the world.” He noted that “varying kinds and degrees of intelligence occur in people, many animals, and some machines.” See John McCarthy, “What is artificial intelligence?,” Stanford University website, accessed October 7, 2017.

    30. The original proposal can be found in John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon, “A proposal for the Dartmouth Summer Research Project on artificial intelligence,” AI Magazine 27, no. 4 (2006).

    31. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” arXiv, February 6, 2015.

    32. It is common for such algorithms to fail on certain ambiguous cases that human experts can label correctly. These new data points can then be used to retrain the models, resulting in improved accuracy. This virtuous cycle of human labeling and machine learning is called “human-in-the-loop computing” (a minimal sketch of the cycle appears after these endnotes). See, for example, Biewald, “Why human-in-the-loop computing is the future of machine learning.”
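
The base-rate example in endnote 24 can be reproduced in a few lines of Python. The sketch below assumes that “95 percent accurate” means the test is correct 95 percent of the time for both sick and healthy patients (95 percent sensitivity and 95 percent specificity); under that assumption, Bayes’ Theorem gives a posterior probability of roughly 28 percent, in line with the endnote’s figure of roughly 29 percent.

    # Worked Bayes' Theorem calculation for endnote 24 (illustrative assumptions:
    # 2 percent prevalence, 95 percent sensitivity, 95 percent specificity).
    prevalence = 0.02    # P(disease)
    sensitivity = 0.95   # P(positive test | disease)
    specificity = 0.95   # P(negative test | no disease)

    # Total probability of a positive test: true positives plus false positives.
    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

    # Bayes' Theorem: P(disease | positive test).
    posterior = sensitivity * prevalence / p_positive
    print(f"P(disease | positive test) = {posterior:.3f}")  # about 0.28

Even though the test is wrong only 5 percent of the time, false positives drawn from the 98 percent of patients without the disease outnumber true positives drawn from the 2 percent who have it, which is why the answer is far closer to 28 percent than to 95 percent.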
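
Endnote 32 describes human-in-the-loop computing as a cycle of machine prediction, human labeling of ambiguous cases, and retraining. The Python sketch below is one plausible illustration of that cycle; the synthetic data, the 0.1 confidence threshold, the choice of scikit-learn’s LogisticRegression, and the ask_human_to_label helper are hypothetical choices made for illustration, not details drawn from the sources cited above.

    # A hypothetical human-in-the-loop cycle: the model flags low-confidence cases,
    # human experts label them, and the model is retrained on the enlarged data set.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def ask_human_to_label(examples):
        # Placeholder for the human review step; a real system would route these
        # cases to expert annotators. Here we simply invent labels.
        return np.random.randint(0, 2, size=len(examples))

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 5))    # hypothetical labeled seed data
    y_train = rng.integers(0, 2, size=200)
    X_stream = rng.normal(size=(50, 5))    # new, unlabeled cases

    model = LogisticRegression().fit(X_train, y_train)

    # Flag the ambiguous cases: predicted probabilities close to 0.5.
    probs = model.predict_proba(X_stream)[:, 1]
    ambiguous = X_stream[np.abs(probs - 0.5) < 0.1]

    if len(ambiguous) > 0:
        new_labels = ask_human_to_label(ambiguous)          # humans supply labels
        X_train = np.vstack([X_train, ambiguous])           # enlarge the training set
        y_train = np.concatenate([y_train, new_labels])
        model = LogisticRegression().fit(X_train, y_train)  # retrain: one turn of the loop

Each new turn of the loop repeats the same three steps, so the model’s accuracy can improve as the pool of human-labeled cases grows.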

Acknowledgement

Cover image by: Barry Downard