
Automation Bias: What Happens When Trust Goes Too Far?


The UK Ministry of Defence (MOD) Artificial Intelligence (AI) strategy sets the vision to be “the world’s most effective, efficient, trusted and influential Defence organisation for our size”. Human-machine teams, a major element of the plan, will flourish on a foundation of trust, but there are still basic challenges that must first be overcome. As society grows accustomed to AI and learns to trust it, the balance between trust and scrutiny can tip too far towards trust. In that trap, automation bias can have severe implications. This article identifies automation bias as a significant issue that the MOD must address through:

  • designing Defence AI to be human-centric
  • understanding the intrinsic risk within each system 
  • defining when each system can and must not be used
  • training the organisation to detect and prevent automation bias.


What is automation bias?

 

Automation bias was first recognised in the late 1990s, as roles like pilots and nuclear power plant operators adopted automation technologies in an effort to remove human error. A strange phenomenon occurred: although the overall volume of errors decreased, the proportion of errors within human decisions increased. This phenomenon became known as automation bias. The bias is typically divided into two categories. A commission bias occurs when decision-makers or operators accept automated recommendations, predictions, or actions despite evidence that the information is wrong. An omission bias occurs when operators fail to act on signs of a problem because the automation tool has not notified them. Automation bias is distinct from model bias or data bias; rather than concerning the AI itself, it concerns how we interact with it.

A commission error might be a driver accelerating to 60 mph despite seeing a 30 mph road sign, because the dashboard indicator stated 60 mph. An omission error might be a driver failing to take a turn they believe is correct because the navigation system has not told them to take it.

Cognitive biases such as automation bias stem from psychological processes that allow the human brain to decide and act more rapidly. Our brain builds cognitive schemas through our experiences of the world around us, which we then use as short-cuts to help us understand and interpret incoming information. In most cases, these help us filter out unnecessary information, freeing resources for the more useful and complex information required for the task. In the case of automation bias, our schema for the system is built on high levels of trust, meaning we defer to the information provided by the machine. Most of the time this saves us precious cognitive resources. However, increases in stress and pressure deplete our capacity, leading to over-reliance on these short-cuts and a failure to identify critical real-world signals that could provide an alternative view. As a result, when these short-cuts are applied to operational Defence use-cases, biases become both more likely and more dangerous.

So, if this was a challenge of the 1990s, why is it re-appearing? The answer is nuanced: in areas where automation has been established for some time, automation bias has become recognised and largely overcome. However, the advancement of AI has produced significantly more capable automation than previously seen, enabling its use in tasks that were unthinkable in the 1990s. Three decades on, automation bias has not been eradicated; it has simply crept into more complex technologies and other sectors. The UK Defence AI Strategy states that “Defence applications for AI stretch from the corporate or business space - the ‘back office’ - to the frontline”, showing intent to rapidly scale the integration of AI into Defence. The frontline use cases are particularly likely to involve time-sensitive situations, creating the perfect opportunity for automation bias to take hold. As with the industries before it, the challenge of automation bias can be reduced for Defence, but as it seeks to establish effective human-machine teams, it could pose a difficult short-term obstacle.


How does it impact the modern battlespace?

 

To contextualise how automation bias could impact an operation, consider the use of computer vision within targeting systems. Sights that employ object detection algorithms already exist and have been heavily researched. With the drive to integrate AI for operational advantage, this use-case is likely to be an early adoption. Engagements are high-stress and extremely fast, which by nature vastly increases the risk of automation bias. If an operator engages a target initially identified by the object detection system, there are some key questions which must be explored:

  1. does this engagement carry the same weight of scrutiny as one without AI augmentation?
  2. how much was the operator’s belief influenced by the system’s notification?
  3. did the operator really interrogate the target, or did they blindly trust the system’s prediction?
  4. were there other factors that the system or the operator failed to detect or ignored which would have given a separate result?
  5. did the user really appreciate the accuracy level of the model?

Such an example is vulnerable to both omission and commission automation bias. For the 95% of predictions that are accurate, the bias is of no consequence; in fact, the engagement speed is increased, further reinforcing the heuristic over time. But it is the 5% of incorrect predictions, and the 1% of those that cause an innocent casualty, where this behaviour moves from an ethical dilemma to one with potential legal ramifications.
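To make those illustrative percentages concrete, the short sketch below works through what they imply at scale. The engagement volume is a hypothetical assumption, not a figure from this article.

# Back-of-the-envelope arithmetic using the illustrative figures above:
# 95% of predictions accurate, 5% incorrect, and 1% of the incorrect ones
# leading to an innocent casualty. The engagement count is hypothetical.

engagements = 10_000                      # hypothetical AI-assisted engagements
accuracy = 0.95                           # illustrative model accuracy
incorrect = engagements * (1 - accuracy)  # 500 incorrect predictions
casualties = incorrect * 0.01             # ~5 engagements harming an innocent person

print(f"Incorrect predictions: {incorrect:.0f}")
print(f"Expected innocent-casualty events: {casualties:.0f}")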

There are several perspectives from which to look at this dilemma:

The operator.

The operator is in a very challenging situation. They trusted the system, undoubtedly believed that the target was legitimate, and may even have believed they had scrutinised the target effectively. Engagements are inherently pressured and time-sensitive, making the operator susceptible to the cognitive short-cuts automation bias presents. The operator must make a judgement call between ‘what if this is not a target and I engage?’ and ‘what if it is a target and I do not engage?’. This judgement is part of every engagement decision; in this case, however, the argument is weighted by the technology. Because the target is presented to them, the operator now has to prove the target is not a threat rather than identify it in the first place, and the fear and pressure of the situation make this much harder. Here, the use of AI has unwittingly impaired the operator’s judgement rather than improving it.

The law and policy.

The Law of Armed Conflict (LOAC) defines the principle of distinction as “a clear distinction between the armed forces and civilians, or between combatants and non-combatants, and between objects that might legitimately be attacked and those that are protected from attack.” In this scenario it would be difficult to state that the operator had correctly distinguished the target as a combatant: the sighting system had, but not the individual accountable for the engagement. This prompts the question of responsibility if the worst-case scenario were to happen. Ultimately the operator took the engagement decision and therefore holds overall accountability. The operator very likely held a true, genuine belief that they were under imminent threat, but did automation bias prevent them from making reasonable efforts to distinguish the target as a combatant? Was that the operator’s fault? Or was it the result of overselling, poor understanding, and an organisational failure to train effectively? Under extant policy, such an issue is almost an inevitability. A fear that should be very real throughout the command structure of Defence is that operators are likely to be the first to stand in court and fall victim to this ambiguity.

The developer.

Another dimension is to consider this activity from the developer’s point of view. If the system fails, then it was either built incorrectly or used for a purpose for which it was not designed. Ultimately, the developer is responsible for the output of the system, but the difference between being built incorrectly and being used incorrectly will determine accountability for the incident. If the requirements failed to take into account the risks of how human operators may interact with the AI system, affecting the decisions the operator makes, is this the fault of the developer or of the requirements? This is a long-standing problem with Defence procurement, and until Defence’s understanding of AI matures, there may not be the rigour within requirements and testing to clearly establish who was at fault. It will be difficult to apply accountability for events such as automation bias. Ultimately this could make the UK MOD an unattractive customer, potentially leaving companies liable for actions outside their control. A standardised, well-understood framework for documenting AI requirements, testing systems, and accepting AI into MOD service could help to reduce such uncertainty.

Public perception.

There is a great deal of public scrutiny around Defence AI, and not without good reason; automation bias events could be very detrimental to the overall Defence AI strategy. Collateral damage has historically posed a significant challenge for Defence, and this would be greatly exacerbated if it were the result of ‘AI gone wrong’. Such an event could undermine trust and have far-reaching consequences, and would likely capture the media’s attention, further compounding the impact. While a shift in public opinion is unlikely to completely halt AI adoption, it could delay existing projects, reduce risk appetite and drive over-restrictive governance, denying Defence its innovative edge and losing the technical initiative.


How does Defence progress?

 

The UK’s aspiration of global AI leadership requires clarity on automation bias. Other industries have shown that it can be overcome. With this in mind, there are four key proactive actions Defence could take to prevent automation bias:

  • design Defence AI to be human-centric
  • understand the risk within the human machine team
  • define when each system can and must not be used
  • train the organisation to detect and prevent automation bias.

Design Defence AI to be human-centric.

Commands, delivery agencies and industry have to think differently about how they design and deliver AI tools. Defence needs to understand the human behaviours that lead to automation bias and build its requirements to mitigate them. Adopting frameworks and methodologies such as human-centred design could enable Defence to embed the removal of automation bias into system design from the outset.
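As a minimal sketch of what a human-centric requirement might look like in practice, the example below keeps an object-detection cue advisory only and forces an explicit, independent operator confirmation before any recommendation is accepted. The names, flow and behaviour are illustrative assumptions, not an existing MOD system or standard.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "possible combatant"
    confidence: float   # model confidence score between 0.0 and 1.0

def present_detection(detection: Detection, operator_confirms) -> bool:
    """Illustrative decision gate: the detection is advisory only.

    The operator must positively confirm the cue against their own
    observation; anything short of explicit confirmation defaults to
    'do not engage', which pushes back against commission bias."""
    # Surface the model's uncertainty rather than a bare "target" cue,
    # so the operator can weigh the prediction instead of deferring to it.
    print(f"Advisory: {detection.label} (model confidence {detection.confidence:.0%})")
    return bool(operator_confirms(detection))

# Example: the operator declines to confirm, so no engagement is recorded.
engage = present_detection(Detection("possible combatant", 0.87), lambda d: False)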

Understand the risk within each system.

Following on from design, the next step Defence should take is to develop a risk assessment framework to analyse the likelihood that issues like automation bias could occur. This should not only look at the likelihood of automation bias occurring, but also assess the environments and situations in which the risk is highest, based on known human behaviours and biases, the AI systems themselves, and how the two could interact. Such assessments could be embedded into the existing acceptance testing process.
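A toy sketch of what such an assessment might score is shown below. The factors, scales and the simple likelihood-times-consequence scoring are illustrative assumptions only; a real framework would be far richer.

from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    time_pressure: int      # 1 (low) to 5 (extreme)
    operator_workload: int  # 1 (low) to 5 (extreme)
    automation_level: int   # 1 (advisory only) to 5 (highly automated)
    consequence: int        # 1 (minor) to 5 (catastrophic)

def automation_bias_risk(u: UseCase) -> int:
    # Crude proxy: factors that make over-reliance more likely, scaled by
    # the consequence of an erroneous or missed intervention.
    likelihood = u.time_pressure + u.operator_workload + u.automation_level
    return likelihood * u.consequence

use_cases = [
    UseCase("back-office document triage", 1, 2, 3, 2),
    UseCase("AI-assisted targeting sight", 5, 5, 2, 5),
]
for uc in sorted(use_cases, key=automation_bias_risk, reverse=True):
    print(f"{uc.name}: risk score {automation_bias_risk(uc)}")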

Define when each system can and must not be used.

An evolution of governance and policy, shaping clearly defined roles and responsibilities, would help to remove the ambiguity in the human-machine relationship. Derived as an output of the risk assessment highlighted above, artefacts like Operational Design Domains (ODDs), a concept taken from robotics1, would be valuable governance tools, acting as a rubric for the user to understand how the system will act in specific situations and helping to define when and where a tool should be used. This could be compared to the Rules of Engagement cards issued in operational theatres.
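To illustrate, a hypothetical, machine-readable ODD-style artefact for the object-detection sight discussed earlier might look like the sketch below. The field names, values and pre-use check are assumptions for illustration, not a defined MOD or robotics standard.

# Hypothetical ODD-style artefact for an AI-assisted sight; all values are illustrative.
ODD = {
    "approved_tasks": ["target cueing (advisory only)"],
    "prohibited_tasks": ["autonomous engagement"],
    "environment": {"light": ["daylight"], "min_visibility_m": 500},
    "min_confidence_to_display": 0.6,
    "required_operator_training": "AI-assisted sight conversion course",
}

def within_odd(task: str, light: str, visibility_m: float) -> bool:
    """Simple pre-use check: outside the ODD the tool must not be used."""
    return (
        task in ODD["approved_tasks"]
        and light in ODD["environment"]["light"]
        and visibility_m >= ODD["environment"]["min_visibility_m"]
    )

# Example: night use falls outside this ODD, so the tool should not be relied upon.
print(within_odd("target cueing (advisory only)", "night", 800))  # False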

Train the organisation to detect and prevent automation bias.

Finally, perhaps the most effective strategy is to be open and vocal about the existence of automation bias. Automation bias is not the first unconscious bias society has faced, and we can lean on the lessons from other areas. Awareness is a highly effective mitigation2 and should be supported by a significant uplift in general knowledge of AI, enabling operators and leaders to understand where and when AI is the right answer. Being able to interrogate and understand these systems effectively at all levels, and to communicate their limitations clearly, will go a long way towards preventing automation bias.

Automation bias poses a threat to Defence AI, given the current low level of exposure and awareness. It can be overcome, but unless the MOD is proactive, its impact will be inevitable and significant. This article has presented four specific steps that the UK MOD can take to start leading the way, presenting a huge opportunity for the UK MOD to cement its position as a pioneer in ethical AI.

_________________________________________________________________________________________

References

1 Czarnecki, Krzysztof. Operational Design Domain for Automated Driving Systems - Taxonomy of Basic Terms. 2018.
2 Deloitte. Mitigating bias in performance management. [Online] 2020.
https://www2.deloitte.com/us/en/insights/topics/talent/mitigating-bias-in-performance-management.html.
