Having mulled over a brief history of AI, and checked out the various modern approaches and recent developments, we now take a look at the potential dangers of AI.
Hysteria and misconceptions inevitably follow in the wake of progress. AI has an undeserved image problem, and Terminator-style scenarios abound. Although such stories can be useful in keeping alive the discussion of potential future ‘pitfalls’ of unchecked development, they often have little to no basis in truth. As Andrew Ng put it, “Worrying about evil AI killer robots today is a little bit like worrying about overpopulation on the planet Mars.” [1]
Current AI systems excel by having a very narrow focus. Many professionals persist in comparing algorithmic decisions to human reasoning, yet such similarities to human cognition “are vastly overstated and narrowly construed”, according to doctoral candidate David Watson [15]. This reinforces public fears of apocalyptic intelligent machines in much the same way that natural phenomena were once attributed to supernatural beings.
Over the last 15 years, starting with the first DARPA Grand Challenge in 2004 [14], we have seen all sorts of cars decked out in sensors protruding in all directions, with massive radar and LiDAR installations, competing to navigate a 240 km route autonomously from California to Nevada. The scenes were more reminiscent of the cartoon series “Wacky Races”, with cars being sidetracked by what would be irrelevant obstacles to a human driver. Today, a sleek production car with no grotesque visible sensors can handle driving itself (laws permitting) in relatively extreme conditions. Yet its AI system cannot suddenly predict the weather or approve your bank loan, much less suddenly ‘decide’ to kill you or a specific bystander. Although there have been many instances where such a system has been credited with actually preventing an accident [15][16][17], horror stories [18] are never far behind.
Horror stories stick. No matter how much an AI system is trained, the unpredictability of human nature and the often-intractable way humans take decisions can be difficult to capture in a machine. Unpredictable outlier situations can cause failures when rigid learning boundaries are imposed on AI. Quoting Dr Stefan Heck, CEO of Nauto: “Vehicle AI is going to be designed to break the law. Autonomous Vehicles should not and cannot be designed to drive the speed limit—it will have to learn to go over the limit and match speeds of other drivers in order to be implemented wide-scale and keep other drivers safe.” Whilst contentious, such a statement has a ring of truth. Driving is not a set of simple rules. Humans have not yet mastered it, and since we operate in the real world, we can often avoid accidents by skirting the strict letter of the law. AI must be able to mimic this cognitive capacity and take such decisions in times of exceptional need. This means that significant improvements are required in how such AI systems are taught. Imitation learning combined with ensemble techniques, utilising many differently trained models, can address some of these shortcomings.
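To make that closing point concrete, here is a minimal sketch of the ensemble idea: several independently trained policies vote on a steering command, and their disagreement flags outlier situations where a fallback is safer. Everything here (the toy linear `Policy`, the feature vectors, the threshold) is a hypothetical stand-in for illustration, not a real driving stack.

```python
import numpy as np

# A minimal sketch (not a real driving stack): an ensemble of
# imitation-learned policies. Each Policy stands in for a model
# trained on a different subset of expert demonstrations.

class Policy:
    def __init__(self, weights):
        self.weights = weights  # toy linear weights, purely illustrative

    def predict_steering(self, observation):
        # Map pre-processed sensor features to a steering command.
        return float(np.tanh(observation @ self.weights))

def ensemble_steering(policies, observation, disagreement_threshold=0.2):
    """Average the commands of differently trained policies.

    The spread across members doubles as a cheap uncertainty signal:
    high disagreement suggests an outlier situation where the system
    should fall back to conservative behaviour or hand over control.
    """
    predictions = np.array([p.predict_steering(observation) for p in policies])
    return predictions.mean(), predictions.std() > disagreement_threshold

# Usage: three policies, e.g. trained on different drivers' data.
rng = np.random.default_rng(0)
policies = [Policy(rng.normal(size=8)) for _ in range(3)]
observation = rng.normal(size=8)  # stand-in for extracted features
command, fallback = ensemble_steering(policies, observation)
print(f"steering={command:.3f}, fallback_needed={fallback}")
```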
No machine is capable of adapting itself to perform outside its design parameters, and there is certainly no clear path to how any machine could become sentient [1]. For many doomsday scenarios, a machine would need to exhibit a measure of self-awareness, possibly coupled with a sense of self-preservation, and then have the means, both programmatic and physical, to act on such feedback to improve its chances of survival. Can a machine trained to recognise faces and target potential criminals with lethal force take over? Unlikely. It may malfunction and cause some damage, but ultimately it cannot purposefully go beyond what it was designed to do. It will run out of ammunition, or simply fuel. Who will provide these? When it breaks down, who will fix it? Only biological entities have any form of regenerative capability.
Runaway AI dangers are a potential reality in an era of ‘Connected AI’. There are no standards for communication between AI agents. Whilst there is basic information transfer, there is no policy exchange, nor information that could be acted upon for mutual benefit. Such a scenario would call for hosts of AI agents on various machines, each an expert at its own task yet able to interact towards achieving a common goal.
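Purely to illustrate what such inter-agent interaction might involve, the sketch below imagines a minimal message that one narrow expert could send another in pursuit of a common goal. Since no such standard exists, every field and name here is an assumption made up for illustration.

```python
import json
from dataclasses import dataclass, asdict

# Entirely hypothetical sketch: no inter-agent standard exists today.
# This imagines the minimum a 'policy exchange' message might carry
# for two narrow experts to cooperate on a shared goal.

@dataclass
class AgentMessage:
    sender: str              # unique agent identifier
    capability: str          # the task this agent is an expert at
    observation: dict        # what the agent currently perceives
    proposed_action: str     # what it intends to do about it
    expected_benefit: float  # claimed contribution to the shared goal

def serialize(message: AgentMessage) -> str:
    # Note what is missing: authentication, provenance and any
    # human-oversight hook; the very gaps the text warns about.
    return json.dumps(asdict(message))

# Example: a perception expert hands a detection to a planning expert.
detection = AgentMessage(
    sender="camera-node-01",
    capability="object-detection",
    observation={"object": "vehicle", "distance_m": 120.0},
    proposed_action="track",
    expected_benefit=0.8,
)
print(serialize(detection))
```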
Such a doomsday scenario requires the development of generalised AI platforms, or ensembles of them, that can operate across multiple scenarios. Modern militaries already operate the concept of a connected battlefield, and the US Army is deploying AI [19] to detect and classify threats to its soldiers. Today it may stop there, but imagine the integration of such an AI with automated, AI-driven defence mechanisms that together act as judge, jury and executioner. Give the connected nodes such capabilities with no human oversight, program them to maximise the benefit to their collective existence and, machine sentience or not, yes, we do need to start worrying.
Fortunately, at present, humans are the ultimate, and only, generalised (natural) intelligence platforms. An excellent real-time face recognition system may be able to identify a person in a crowd, yet ask the same system to distinguish between a bird and a cat and it fails miserably. A human can do both, albeit lacking the processing performance (as opposed to the cognitive ability) to pick a single face from 1,000 others in a few milliseconds. Anybody who has ever played the classic “Where’s Wally?” will quickly understand this.
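This narrowness is structural, not a matter of training effort: a closed-set classifier can only ever answer with one of the labels it was trained on. In the toy sketch below (random weights standing in for a real face-recognition model, all names hypothetical), a ‘bird’ input is confidently assigned a face label, because no other answer is possible.

```python
import numpy as np

# Toy illustration of 'narrow focus': a closed-set classifier can only
# ever answer with one of the labels it was trained on. The weights
# here are random stand-ins, not a trained face-recognition model.

FACE_LABELS = ["alice", "bob", "carol"]  # the only classes it knows

def softmax(logits):
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

def classify(features, weights):
    probabilities = softmax(weights @ features)
    best = int(np.argmax(probabilities))
    return FACE_LABELS[best], float(probabilities[best])

rng = np.random.default_rng(42)
weights = rng.normal(size=(len(FACE_LABELS), 16))

# Show it a 'bird': it still names a face, because answering
# outside its label set is structurally impossible.
bird_features = rng.normal(size=16)
label, confidence = classify(bird_features, weights)
print(f"The bird photo is labelled '{label}' with confidence {confidence:.2f}")
```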
AI systems merely leverage mathematical algorithms to mimic a cognitive brain. However, cognition is not sentience. No machine has self-awareness, and no machine possesses the capability to reprogram itself for anything it was not designed to handle. At least not yet, and not in the near future, and therein lies our greatest advantage: time to act through regulation and legislation. This is what luminaries like Professor Hawking were warning us about. We cannot remain idle, trusting in some John Connor. It is important that the discussion around the ethics, regulation and societal impact of AI begins now.
The real danger from AI today is societal: primarily job displacement, privacy concerns and the over-delegation of human decisions to poorly understood platforms. AI will compete with many people for jobs previously considered secure and unassailable. While many jobs will become obsolete, the need for employees will not. This scenario is not new. Many will remember the fuss about IT replacing humans, how mechanisation removed the need for labourers on farms and in factories, and how sophisticated roles, such as that of the flight engineer, are now redundant on modern airliners. AI can do to white-collar jobs what assembly-line automation did to blue-collar jobs. The educational system must adapt once again. Just as it did in the 90s and 00s to prepare a generation for the challenges of office automation, instant communications and instant gratification, it now needs to focus on what humans do best: how to think, rationalise and question.
[1] A. Ng, “Andrew Ng: Why AI Is the New Electricity | Stanford Graduate School of Business,” 2017. https://www.gsb.stanford.edu/insights/andrew-ng-why-ai-new-electricity.
[2] J. McCarthy, “Arthur Samuel : Pioneer in Machine Learning.” http://infolab.stanford.edu/pub/voy/museum/samuel.html.
[3] S. Schuchmann, “History of the first AI Winter - Towards Data Science,” 2019. https://towardsdatascience.com/history-of-the-first-ai-winter-6f8c2186f80b.
[4] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, pp. 533–536, 1986, doi: 10.1038/323533a0.
[5] “Alvey: the betrayal of a research programme | New Scientist.” https://www.newscientist.com/article/mg13017682-400-alvey-the-betrayal-of-a-research-programme/.
[6] “House of Lords - AI in the UK: ready, willing and able? - Artificial Intelligence Committee.” https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/10018.htm.
[7] B. Butler, “In Cloud Computing Price War Amazon and Google Drop Rates Again | CIO,” 2013. https://www.cio.com/article/2386980/in-cloud-computing-price-war-amazon-and-google-drop-rates-again.html.
[8] J. Jackson, “Price war! Amazon cuts cloud costs to counter Google | Computerworld,” 2014. https://www.computerworld.com/article/2489105/price-war--amazon-cuts-cloud-costs-to-counter-google.html.
[9] S. Ranger, “Microsoft, AWS and Google may have just started the next cloud computing price war | ZDNet,” 2017. https://www.zdnet.com/article/microsoft-aws-and-google-may-have-just-started-the-next-cloud-computing-price-war/.
[10] Bloomberg, “Artificial Intelligence: Reality vs Hype - YouTube,” 2019. https://www.youtube.com/watch?v=NUUsICq5ySk [Accessed: 23-Jan-2020].
[11] W. Knight, “Reinforcement learning,” Technology Review, vol. 120, no. 2, Massachusetts Institute of Technology, pp. 32–35, 01-Mar-2017.
[12] F. Woergoetter and B. Porr, “Reinforcement learning,” Scholarpedia, vol. 3, no. 3, p. 1448, 2008, doi: 10.4249/scholarpedia.1448.
[13] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” 2009, doi: 10.1109/cvprw.2009.5206848.
[14] “DARPA Grand Challenge - Wikipedia.” https://en.wikipedia.org/wiki/DARPA_Grand_Challenge.
[15] “Tesla Model 3 Driver Claims Autopilot Saved Lives In This Accident.” https://insideevs.com/news/392220/video-tesla-autopilot-life-saver-crash/.
[16] “Watch Tesla Model 3 On Autopilot Avoid Crash With Distracted Driver.” https://insideevs.com/news/391075/video-autopilot-prevents-tesla-crash/.
[17] “Watch Tesla Model 3 Autopilot Swerve To Avoid Car Passing On Shoulder.” https://insideevs.com/news/372165/video-tesla-autopilot-avoid-crash/.
[18] S. O’Kane, “Tesla hit with another lawsuit over a fatal Autopilot crash - The Verge,” 2019. https://www.theverge.com/2019/8/1/20750715/tesla-autopilot-crash-lawsuit-wrongful-death.
[19] J. Barnett, “The Army working on a battlefield AI ‘teammate’ for soldiers,” 2020. https://www.fedscoop.com/army-artificial-intelligence-ai-battlefield-systems/.
[20] “Catholic Church joins IBM, Microsoft in ethical A.I. push | Fortune.” https://fortune.com/2020/02/28/ai-ethics-vatican-microsoft-ibm/.
[21] “Algorithmic Accountability Act of 2019 Bill Text.” https://www.documentcloud.org/documents/5816234-Algorithmic-Accountability-Act-of-2019-Bill-Text.html.
[22] J. Emspak, “World’s Most Powerful Particle Collider Taps AI to Expose Hack Attacks - Scientific American,” 2017. https://www.scientificamerican.com/article/worlds-most-powerful-particle-collider-taps-ai-to-expose-hack-attacks/.
[23] J. McCormick, “Health Systems Look to AI to Prevent Sepsis Deaths - WSJ,” 2020. https://www.wsj.com/articles/health-systems-look-to-ai-to-prevent-sepsis-deaths-11580207401?mod=djemAIPro.
[24] J. Council, “Bayer Looks to Emerging Technique to Overcome AI Data Challenges - WSJ,” 2020. https://www.wsj.com/articles/bayer-looks-to-emerging-technique-to-overcome-ai-data-challenges-11580121000?mod=djemAIPro.
[25] W. Hählen and S. Kapreillian, “RPA for Tax | Deloitte Global,” Deloitte Tax Solutions. https://www2.deloitte.com/global/en/pages/tax/solutions/rpa-for-tax.html.
[26] https://www.gartner.com/doc/reprints?id=1-1YBATZQ1&ct=200210&st=sb