Posted: 15 June 2022 | 8-minute read

Establishing ethical guardrails in artificial intelligence

A blog post by Beena Ammanath, Executive Director of the Global Deloitte AI Institute and Trustworthy & Ethical Technology Lead at Deloitte Consulting LLP, and David Linthicum, Chief Cloud Strategy Officer, Deloitte Consulting LLP

With any emerging technology, we often rush to capture its value and fold its potential into current systems and processes, with little thought to unforeseen consequences and negative impacts on society at large. This moral dilemma is nothing new. When cars were invented, they were a technological marvel: they could easily move people and things from point A to point B. But the roads and infrastructure weren't equipped to handle them, lifesaving seatbelts had not yet been invented, and the need for speed limits hadn't been considered.

The same can be said about artificial intelligence (AI): We've reached a point where available compute power and cloud technology have transformed AI from a scholarly topic into a powerful tool that organizations can leverage today. And with that more widespread adoption, conversations about AI's ethical implications, and how to mitigate its negative impacts, are more important than ever.

Where we are with AI today

AI, especially when it's cloud-enabled, can be a force multiplier and a tool for innovation. After all, nearly everything around us, from culture to consumer products, is a product of intelligence. That's why it's critical to discuss the ethical issues and implications of ubiquitous AI. While there's no doubt that AI has the potential to benefit consumers, businesses, and the wider economy, it can also amplify risks and create new, unexpected challenges.

As AI models become increasingly sophisticated in their speed, scale, and complexity, and in their capacity for autonomous decision-making, they've sparked considerable debate in tech circles. In recent years, as organizations apply more artificial intelligence/machine learning (AI/ML), real legal and public concerns have made the ethics of AI a hot topic. For example, facial recognition algorithms developed by large companies have reportedly shown bias in classifying people's gender, a consequence of the training data sets available to them.
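To make that idea of bias concrete: one common first check is to disaggregate a classifier's accuracy by demographic group and compare the rates. The short Python sketch below is purely illustrative and is not drawn from this post or any particular vendor's system; the function name and the data are hypothetical.

```python
# Minimal sketch of a disaggregated accuracy check (hypothetical helper,
# hypothetical data). A large gap between groups is one signal of bias.

def accuracy_by_group(records):
    """records: iterable of (predicted_label, true_label, group) tuples."""
    totals, correct = {}, {}
    for predicted, actual, group in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    # Per-group accuracy = correct predictions / total predictions
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Hypothetical audit sample for a gender classifier:
sample = [
    ("male", "male", "group_a"),
    ("female", "female", "group_a"),
    ("female", "female", "group_a"),
    ("male", "female", "group_b"),   # misclassification
    ("female", "female", "group_b"),
    ("male", "female", "group_b"),   # misclassification
]

print(accuracy_by_group(sample))
# {'group_a': 1.0, 'group_b': 0.333...} -- the disparity is the red flag
```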

We are at one of the most interesting junctures in the adoption curve for AI and emerging technologies: regulations and rules have yet to mature, the best-practices playbook is not fully defined, and the responsibility for embedding ethical guardrails within AI systems remains largely a self-regulation effort undertaken by businesses themselves.

Addressing ethical implications of AI today and the importance of purpose before product

Historically, the technologists who build AI and other foundational technologies have been trained to evaluate the value of what they're developing (the positive things the technology can do), and often they build it simply because it can be built. But technologists must look beyond building for its own sake, or even building to solve a business pain point, and consider the potential negative implications a technology could introduce. It's imperative that they anticipate these unintended consequences and proactively address them by putting the requisite guardrails in place.

But the question is, who should develop and document these regulations? While industry and government think tanks will likely produce policies and regulations specifying global and country-specific rules, for the near term these are expected to be rolled out at the organization level. That puts it in the hands of the engineers and scientists developing AI solutions to embed purpose in their AI products and to ask the right questions.

As AI and other emerging technologies become more powerful while guardrails have yet to be developed, it's more important than ever for technologists to consider the implications of what they're building. That requires continued awareness of, and dialogue around, the importance of ethical AI. Doing so will foster a workforce that is more purposeful about where its members want to work and the products they want to build, which will naturally lead to the development of more ethical technology overall.

Interested in learning more about ethics in AI? Listen to this podcast, “Trustworthy AI: What it is and who should make it happen,” where David Linthicum, chief cloud strategy officer, Deloitte Consulting LLP, chats with Beena Ammanath, executive director of the Global Deloitte AI Institute and Trustworthy & Ethical Technology lead at Deloitte Consulting LLP, about the reasons to establish ethical guardrails around the use of AI, as well as who should be responsible for developing and maintaining those guardrails—now and in the future. Beena is also the author of Trustworthy AI, which helps businesses navigate trust and ethics in AI.

Get in touch

David Linthicum

Managing Director | Chief Cloud Strategy Officer

As the chief cloud strategy officer for Deloitte Consulting LLP, David is responsible for building innovative technologies that help clients operate more efficiently while delivering strategies that enable them to disrupt their markets. David is widely respected as a visionary in cloud computing—he was recently named the number one cloud influencer in a report by Apollo Research. For more than 20 years, he has inspired corporations and start-ups to innovate and use resources more productively. As the author of more than 13 books and 5,000 articles, David’s thought leadership has appeared in InfoWorld, Wall Street Journal, Forbes, NPR, Gigaom, and Lynda.com. Prior to joining Deloitte, David served as senior vice president at Cloud Technology Partners, where he grew the practice into a major force in the cloud computing market. Previously, he led Blue Mountain Labs, helping organizations find value in cloud and other emerging technologies. He is a graduate of George Mason University.

Beena Ammanath

Global & US Technology Trust Ethics Leader | Trustworthy AI Leader

Beena leads Trustworthy AI & Technology Trust Ethics at Deloitte. She is the author of Trustworthy AI, a book that helps businesses navigate trust and ethics in AI. Beena has extensive global experience in AI and digital transformation spanning e-commerce, finance, marketing, telecom, retail, software products, services, and industrial domains. She is also the founder of the non-profit Humans For AI, an organization dedicated to increasing diversity in AI, and she serves on the Board of AnitaB.org and the Advisory Board at the Cal Poly College of Engineering. Prior to joining Deloitte, she was a board member and advisor to several technology start-ups. Beena thrives on envisioning and architecting how data, artificial intelligence, and technology in general can make our world a better, easier place to live for all humans.