The problem with bias begins with the common misconception that it is well understood and can therefore be avoided with relative ease. The reality is more subtle. Keenly aware of this, academics have proposed many mathematical definitions of bias, often mutually incompatible, each aiming to capture a different notion of fairness. The consequence for any AI model is that bias must be evaluated in a way suited to the particular use case, so that the appropriate fairness objective is actually met.
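To make that incompatibility concrete, the minimal sketch below (hypothetical data, illustrative names) evaluates two widely used fairness definitions, demographic parity and equal opportunity, on the same set of predictions. The model satisfies one definition while clearly violating the other.

```python
# Minimal sketch with hypothetical data: two common fairness definitions
# applied to the same predictions can disagree.

def rate(values):
    return sum(values) / len(values)

# Hypothetical model outputs for two groups. Each tuple is
# (true_label, predicted_label).
group_a = [(1, 1), (1, 1), (0, 1), (0, 0)]   # base rate of positives: 0.50
group_b = [(1, 1), (0, 0), (0, 0), (0, 0)]   # base rate of positives: 0.25

def selection_rate(pairs):
    """P(Y_hat = 1): share of positive predictions in a group."""
    return rate([pred for _, pred in pairs])

def true_positive_rate(pairs):
    """P(Y_hat = 1 | Y = 1): positive predictions among actual positives."""
    return rate([pred for label, pred in pairs if label == 1])

# Demographic parity compares selection rates across groups.
dp_gap = abs(selection_rate(group_a) - selection_rate(group_b))

# Equal opportunity compares true-positive rates across groups.
eo_gap = abs(true_positive_rate(group_a) - true_positive_rate(group_b))

print(f"demographic parity gap: {dp_gap:.2f}")   # 0.50 -- violated
print(f"equal opportunity gap:  {eo_gap:.2f}")   # 0.00 -- satisfied
```

In this toy case the true-positive rates match exactly while the selection rates differ by 0.50, because the two groups have different base rates. It is a well-known result that when base rates differ across groups, criteria such as demographic parity and equalized odds cannot, in general, be satisfied simultaneously, which is why the fairness objective must be chosen per use case rather than assumed.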
Machine learning models can also learn bias from their training data, which is why legislators and regulators are issuing a growing body of guidance, frameworks and rules on the subject. AI is already delivering tremendous commercial and scientific value, yet organizations are increasingly aware that it must be used with caution. Executives recognize that inexperience in managing the technology can yield unexpected, even harmful results. Worse, unlike people-driven processes, AI systems can cause such harm systemically and in ways that are difficult to detect. The resulting legal, regulatory and reputational exposure is disproportionately high compared with traditional, less inherently scalable approaches.
Understanding the complex nuances of bias is the first major step towards managing the risk effectively. Acting on that awareness requires a suitable operating model and appropriately skilled staff for this fairness dimension of model quality. Once an organization is properly sensitized to bias, much of its existing modelling or software development operating model can be retained. That, together with corresponding training for modelers, both to be conscious of bias and to know how to analyze it, will likely suffice to ward off the greatest dangers and the unpleasant headlines, or fines, that come with them. The sketch below illustrates how such an analysis could slot into an existing process.
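As one illustration of wiring a bias check into a routine release workflow, the following sketch gates a model release on a simple disparate impact test. The 0.8 threshold follows the widely cited "four-fifths rule" heuristic; the function names and numbers are hypothetical, not a prescribed implementation.

```python
# Illustrative sketch: a fairness gate in a model-release workflow.
# Names, numbers and threshold usage are assumptions for illustration.

def disparate_impact_ratio(rate_protected: float, rate_reference: float) -> float:
    """Ratio of selection rates between a protected group and a reference group."""
    return rate_protected / rate_reference

def release_gate(rate_protected: float, rate_reference: float,
                 threshold: float = 0.8) -> bool:
    """Block a model release when the disparate impact ratio falls below the
    threshold (0.8 echoes the four-fifths rule heuristic)."""
    ratio = disparate_impact_ratio(rate_protected, rate_reference)
    if ratio < threshold:
        print(f"FAIL: disparate impact ratio {ratio:.2f} below {threshold}")
        return False
    print(f"PASS: disparate impact ratio {ratio:.2f}")
    return True

# Hypothetical selection rates measured on a validation set.
release_gate(rate_protected=0.30, rate_reference=0.50)   # ratio 0.60 -> FAIL
```

A check of this kind is deliberately simple: its value lies less in the arithmetic than in making a fairness criterion an explicit, auditable step of the development process rather than an afterthought.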
Download the Deloitte whitepaper "Striving for Fairness in AI Models" to learn more.