
The rise of data and AI ethics

Managing the ethical complexities of the age of big data

As technology tracks huge amounts of personal data, data ethics can be tricky, with very little covered by existing law. Governments are at the center of the data ethics debate in two important ways.
Nihar Dalmia

GOVERNMENTS have defined almost every conceivable aspect of property ownership. Can I cut down my neighbors’ tree if it grows over my patio? Only those limbs that grow over the property line. Can I play music on my porch? Only if it doesn’t interfere with my neighbors’ enjoyment of their property. The complexity of the legal questions surrounding physical property is immense, but years of developing constitutions and legislation, as well as court decisions, have made the gray areas manageably small, and property owners and other citizens understand their property rights and responsibilities.

The same is not true of data rights. The entire notion of “data and AI ethics” has become a central concern for individuals, businesses, and governments due to the burgeoning role of data in our lives. The internet, the Internet of Things, and sensors can track an astounding amount of data about a person—from sleep habits, to moment-to-moment location, to every keyboard click ever executed. Moreover, as artificial intelligence (AI) systems make more decisions, AI ethics becomes increasingly relevant to public policy. If a self-driving car faces a dangerous situation, should it choose the course least risky to the passengers or to a pedestrian—even if the pedestrian is at fault? Data ethics can be tricky, and very little of it is defined by existing law.

The United States Constitution guarantees “the right of the people to be secure in their persons, houses, papers, and effects”—but how does that apply to an individual’s data and privacy? In what ways may companies, or individuals, or even governments that collect data about an individual use that information?

Here are four of the biggest issues driving the conversation around data and AI ethics:

  1. Privacy. Citizens face widespread threats to their privacy: smartphones collect data continuously, and governments can examine citizens’ online activity. Law enforcement agencies worldwide are deploying facial recognition technology, and retail outlets have begun cataloging shoppers with facial recognition, which can be matched to their credit cards—often without customers’ awareness or consent.1 Such occurrences are increasingly common.
  2. Lack of transparency. AI-based algorithms are often closely held secrets or are so complex that even their creators can’t explain exactly how they work. This makes it harder to trust their results. From bank loans to college admissions to job offers, decisions are often made based on data from these complex algorithms. Which decisions might be made by “secret” criteria? Which aren’t? And what role should government play in ensuring transparency?

  3. Bias and discrimination. Real-world bias can shape algorithmic bias. Some court systems have begun using algorithms to evaluate the risk that criminal defendants will reoffend, and even to inform sentencing. However, criminal risk scores and related research have raised concerns over potential algorithmic bias and led to calls for greater scrutiny.2

    Understanding how an algorithm works will not, by itself, solve the broader issue of discrimination. The critical factor is the underlying data set. If the underlying data has historically overrepresented a certain gender, race, or nationality, then the results could be biased against groups outside those categories.
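    As a minimal illustration of the point above—skew in the training data can surface as biased outputs—the sketch below (hypothetical data and attribute names, not any agency’s actual records) measures each group’s share of a training set for one protected attribute:

    ```python
    from collections import Counter

    def representation_report(records, attribute):
        """Report each group's share of a training set for one
        protected attribute (e.g., gender or nationality)."""
        counts = Counter(r[attribute] for r in records)
        total = sum(counts.values())
        return {group: n / total for group, n in counts.items()}

    # Hypothetical historical loan data, skewed toward one group.
    training_data = [{"gender": "M"}] * 80 + [{"gender": "F"}] * 20
    print(representation_report(training_data, "gender"))
    # A model trained on this set sees four times as many "M" examples,
    # so its error rates for the underrepresented group may be worse.
    ```

    Fairness toolkits perform far richer analyses, but even a simple representation report like this can flag a data set that warrants closer examination before training begins.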

  4. Lack of governance and accountability. One of the critical issues in the AI debate is the big question of who governs the AI system and data. Who creates ethical standards and norms? Who is accountable when unethical practices emerge? Who authorizes the collection, storage, and destruction of data?

These high-profile issues, in turn, are driving responses by stakeholders ranging from governments to corporations. To learn more, read Can AI be ethical? Why enterprises shouldn’t wait for AI regulations.

Government’s role in data and AI ethics

Governments are at the center of the data ethics debate in two important ways. First, governments “own” a massive amount of data about citizens, from health records to what books a citizen checked out of the library. Second, the government is a “regulator” of the corporate use of data collected online.

Governments are increasingly considering their regulatory responsibility. For instance, the European Union’s General Data Protection Regulation (GDPR) provides strict controls over cross-border data transmissions, gives citizens the right to be “forgotten,”3 and mandates that organizations, including government agencies, provide “data protection by design” and “data protection by default.”4

Similar to GDPR, the state of California’s Consumer Privacy Act aims for more stringent privacy requirements.5 Other planned global efforts include the “Robots Ethics Charter” from South Korea’s Ministry of Commerce, Industry, and Energy, which provides manufacturers with ethical standards for programming robots,6 and an international panel planned by Canada and France to rein in unethical uses of AI.7

Evolving privacy standards and ethics frameworks

Many governments are formalizing their approach to algorithmic risks. The UK government, for example, has published a data ethics framework to clarify how public sector entities should treat data.8 Canada has developed an open-source Algorithmic Impact Assessment questionnaire that can assess and address risks associated with automated decision systems.9 The European Union, too, has been gathering comments from experts on its ethics guidelines for AI.10

Developing AI toolkits

Many big technology firms are also invested in addressing these challenges. IBM recently released the AI Fairness 360 open-source toolkit to check unwanted bias in data sets and machine learning models. Similar initiatives include Facebook’s Fairness Flow and Google’s What-If Tool.11 In another example, the Ethics and Algorithm Toolkit was developed collaboratively by the Center for Government Excellence, San Francisco’s DataSF program, the Civic Analytics Network, and Data Community DC.12

A consortium approach to AI ethics

Industry consortia are developing standards and frameworks in their industries; examples include:

  • The Council on the Responsible Use of AI formed by the Bank of America and Harvard Kennedy School’s Center for Science and International Affairs;13
  • A consortium in Singapore to drive ethical use of AI and data analytics in the financial sector;14 and
  • The Partnership on AI, representing some of the biggest technology firms, including Apple, Amazon, Google, Facebook, IBM, and Microsoft, to advance the understanding of AI technologies.15

Data signals

  • One hundred and seven countries have formulated legislation to protect data and privacy of citizens.16
  • Since 2013, 9.7 billion data records were lost or stolen globally.17
  • Data Protection Authorities have received more than 95,000 complaints under the EU’s GDPR legislation since its launch.18
  • From 2017 to 2018, media mentions of AI and ethics doubled. More than 90 percent of mentions indicated positive or neutral sentiment.19
  • The UK government launched a Centre for Data Ethics and Innovation with a £9 million budget.20

Moving forward

  • Acknowledge the need for ethics in the AI era. Create an AI ethics panel or task force by tapping into the expertise from the private sector, startups, academia, and social enterprises.
  • Create an algorithmic risk management strategy and governance structure to manage technical and cultural risks.
  • Develop governance structures that monitor the ethical deployment of AI.
  • Establish processes to test training data and outputs of algorithms, and seek reviews from internal and external parties.
  • Encourage diversity and inclusion in the design of applications.
  • Emphasize creating explainable AI algorithms that enhance transparency and increase trust among those affected by algorithmic decisions.
  • Train developers, data architects, and users of data on the importance of data ethics specifically relating to AI applications.
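The testing step in the list above—checking the outputs of algorithms, not just their inputs—can be sketched as a simple output audit. The data, group labels, and threshold below are hypothetical, and real fairness toolkits compute many more metrics; this shows only the basic selection-rate comparison:

```python
def selection_rates(decisions):
    """decisions: list of (group, approved) pairs.
    Returns each group's share of favorable outcomes."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group selection rate;
    values far below 1.0 flag a potential bias problem."""
    return min(rates.values()) / max(rates.values())

# Hypothetical loan decisions: group A approved 50%, group B 25%.
outcomes = [("A", True)] * 5 + [("A", False)] * 5 + \
           [("B", True)] * 2 + [("B", False)] * 6
rates = selection_rates(outcomes)
print(rates, disparate_impact(rates))
```

An audit process might run a check like this on every release of a decision system and escalate for human review whenever the ratio falls below an agreed threshold.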

Potential benefits

  • More accountability from developers;
  • The rise of AI for social good; and
  • Growing ecosystem approach to AI.

Risk factors

  • Threat to citizens’ right to privacy;
  • Lack of transparency; and
  • Bias and discrimination.

Read more about data and AI ethics in the Chief Data Officer Playbook.

Cognitive Advantage

Deloitte’s Cognitive Advantage is a set of offerings designed to help organizations transform decision-making, business processes, and interactions through insights, automation, and engagement capabilities. Tailored to the federal government and powered by our cognitive platform, it encompasses technologies capable of mimicking, augmenting, and in some cases exceeding human capabilities. With these capabilities, government clients can improve operational efficiency, enhance citizen and end-user experience, and give workers tools to enhance judgment, accuracy, and speed.


The authors would like to thank Mahesh Kelkar from the Deloitte Center for Government Insights for driving the research and development of this trend.

The authors would also like to thank Diwya Shukla Kumar and Akash Keyal for their research contributions, and Thomas Beyer, Christiane Cunningham, Patrick Wauters, Yang Chu, and Kashvi Bhachawat for reviewing at critical junctures and contributing their ideas and insights to this trend.

Cover artist: Traci Daberko

  1. Cyrus Farivar, “Retailers want to be able to scan your face without your permission,” Ars Technica, June 17, 2015.
  2. Julia Angwin et al., “Machine bias,” ProPublica, May 23, 2016; Joy Buolamwini et al., “Gender Shades,” MIT Media Lab, accessed April 30, 2019.
  3. Andrada Coos, “EU vs US: How do their data protection regulations square off?,” Endpoint Protector, January 17, 2017.
  4. European Commission, “What does data protection ‘by design’ and ‘by default’ mean?,” accessed April 30, 2019.
  5. Etelka Lehoczky, “California is bringing E.U.-style privacy laws to the U.S. Here’s what you need to know,” Inc., accessed April 30, 2019.
  6. CBC, “Ethical code for robots in works, South Korea says,” March 7, 2007.
  7. Tom Simonite, “Canada, France plan global panel to study the effects of AI,” Wired, December 6, 2018.
  8. Gov.UK, “Data Ethics Framework,” August 30, 2018.
  9. Government of Canada, “Algorithmic Impact Assessment,” accessed April 30, 2019.
  10. European Commission, “Ethics guidelines for trustworthy AI,” April 8, 2019.
  11. David Schatsky et al., Can AI be ethical? Why enterprises shouldn’t wait for AI regulation, Deloitte Insights, April 17, 2019.
  12. Zack Quaintance, “Anti-bias toolkit offers government a closer look at automated decision-making,” Government Technology, September 24, 2018.
  13. Peter High, “Bank of America and Harvard Kennedy School announce The Council on the Responsible Use of AI,” Forbes, April 23, 2018.
  14. Finextra, “Singapore’s MAS preps guidelines for ethical use of AI and data analytics,” April 4, 2018.
  15. Partnership on AI, “About us,” accessed April 30, 2019.
  16. United Nations, “Data protection and privacy legislation worldwide,” March 27, 2019.
  17. Varonis, “The world in data breaches,” accessed April 30, 2019.
  18. European Commission, “GDPR in numbers,” accessed April 30, 2019.
  19. David Schatsky et al., Can AI be ethical? Why enterprises shouldn’t wait for AI regulation.
  20. Gov.UK, “Tech sector backs British AI industry with multi million pound investment,” April 26, 2018.
