Ethical technology and trust

Applying your company’s values to technology, people, and processes

Every aspect of an organization disrupted by technology represents an opportunity to gain or lose stakeholders' trust. Leaders are approaching trust not as a compliance or PR issue but as a business-critical goal.

Catherine Bannister
Deborah Golden

A common refrain in Deloitte’s Tech Trends reports is that every company is now a technology company. With the advent of digital technology, businesses have been asking customers to trust them in new and deeper ways, from asking for personal information to tracking online behavior through digital breadcrumbs. At the same time, headlines regularly chronicle technology-based issues such as security hacks, inappropriate or illegal surveillance, misuse of personal data, spread of misinformation, algorithmic bias, and lack of transparency. The distrust these incidents breed in stakeholders—whether customers, employees, partners, investors, or regulators—can significantly damage an organization’s reputation.1 Indeed, consumer trust in commercial enterprises is declining, citizens are becoming wary of public institutions, and workers are asking employers to explicitly state their core values.2

In what we recognize as an emerging trend, some companies are approaching trust not as a compliance or public relations issue but as a business-critical goal to be pursued—one that can differentiate them in an increasingly complex and crowded market. As discussed in Deloitte’s 2020 Global Marketing Trends report, brand trust is more important than ever for businesses—and it’s all-encompassing. Customers, regulators, and the media expect brands to be open, honest, and consistent across all aspects of their business, from products and promotions to workforce culture and partner relationships.3

Every aspect of a company that is disrupted by technology represents an opportunity to gain or lose trust with customers, employees, partners, investors, and/or regulators. Leaders who embed organizational values and the principles of ethical technology across their organizations are demonstrating a commitment to “doing good” that can build a long-term foundation of trust with stakeholders. In this light, trust becomes a 360-degree undertaking to help ensure that an organization’s technology, processes, and people are working in concert to maintain that foundation.

As the adage reminds us, trust is hard to gain and easy to lose.

The ethical technology terrain

The term ethical technology refers to an overarching set of values that is not limited to or focused on any one technology, instead addressing the organization’s approach to its use of technologies as a whole and the ways in which they are deployed to drive business strategy and operations.4 Companies should consider proactively evaluating how they can use technology in ways that are aligned with their fundamental purpose and core values.

Ethical technology policies do not replace general compliance or business ethics, but the two should connect. Just as your approach to cybersecurity hasn’t taken the place of your company’s more general privacy policies, your ethical technology approach should complement your overall approach to ethics and serve as its logical extension in the digital realm. Some companies are expanding the mission of existing ethics, learning, and inclusion programs to include ethical technology; others maintain separate technology ethics programs. Either approach helps keep technology ethics top of mind across the organization and encourages executives to consider the distinctions between technology-related ethical issues and broader corporate and professional ethics concerns.

The fifth annual study of digital business by MIT Sloan Management Review and Deloitte found that just 35 percent of respondents believe their organization’s leaders spend enough time thinking about and communicating the impact of digital initiatives on society. While respondents from digitally maturing companies are the most likely to say their leaders are doing enough, even then, the percentage barely breaks into a majority, at 57 percent.5

These findings suggest that organizations still have significant room to take the lead. Those that develop an ethical technology mindset—demonstrating a commitment to ethical decision-making and promoting a culture that supports it—have an opportunity to earn the trust of their stakeholders.

In pursuit of trust

In the digital era, trust is a complex issue fraught with myriad existential threats to the enterprise. And while disruptive technologies are often viewed as vehicles for exponential growth, tech alone can’t build long-term trust. For this reason, leading organizations are taking a 360-degree approach to maintain the high level of trust their stakeholders expect.

In technology we trust

Artificial intelligence (AI), machine learning, blockchain, digital reality, and other emerging technologies are integrating into our everyday lives more quickly and deeply than ever. How can businesses create trust with the technologies their customers, partners, and employees are using?

  • Encode your company’s values. With technology ingrained in the business and machine learning driving business decisions and actions, an organization’s values should be encoded and measured within its technology solutions. Digital systems can be designed to reduce bias and enable organizations to operate in line with their principles.6 For instance, a city government worked with policy institutes to develop an algorithm toolkit intended to identify ways to minimize unintended harm to constituents by limiting biases in the criminal justice system and other institutions.

    Safeguards can promote stakeholder welfare by helping prevent users from engaging with technology in unhealthy or irresponsible ways. Examples include a company that imposes time and spending limits on habit-forming games, a content aggregator that prompts users to be skeptical about the veracity of crowdsourced information, and cloud computing providers that automatically issue alerts before customers go over budget.

    Explainable AI technologies can clarify how AI-driven decisions are made. For instance, to enhance confidence in AI-supported medical diagnoses, health care companies are developing solutions that assign each diagnosis a confidence score explaining the probability of the diagnosis and the contribution of each patient attribute (vital signs, signals from medical reports, lifestyle traits, and so on). Clinical professionals can see how the conclusion was reached and make a different one if required.7
  • Build a strong data foundation. Without methodically and consistently tracking what data you have, where it lives, and who can access it, you cannot create an environment of trust. A strong data foundation unifies stakeholders around a single vision of data accountability and delivers on secure technology that supports effective data management.8 Leaders should aim to give stakeholders some control over how their data will be used and delete data on demand unless it’s necessary to keep it for legal or regulatory purposes.
  • Harden your defenses. Deloitte’s 2019 Future of Cyber Survey9 reveals that executives are increasingly spending significant amounts of time focusing on cyber issues, and rightly so. Cyber defenses represent your commitment to protect your customers, employees, and business partners from those who do not share their values—or yours. Cyber risk strategy should be built and managed from the ground up, embedded in the business mindset, strategy, and policies, not only within IT. Business leaders can collaborate with IT to create a comprehensive cyber risk strategy—encompassing security, privacy, integrity, and confidentiality—to help build stakeholder trust and drive competitive advantage. This requires considering the organization’s risk tolerance, identifying the most vulnerable gaps as well as the most valuable data and systems, then devising plans for mitigation and recovery.
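The confidence-score approach described in the explainable-AI bullet above can be sketched in a few lines. The following is a minimal, illustrative Python example—the model form, weights, and patient features are hypothetical, not drawn from any real clinical system. A simple logistic model reports both an overall confidence and each feature’s contribution to it, which is the property that lets a clinician see what drove the result:

```python
import math

# Hypothetical model weights for a simplified diagnosis model
# (illustrative values only, not clinically derived).
WEIGHTS = {"resting_heart_rate": 0.8, "systolic_bp": 0.5, "smoker": 1.2}
BIAS = -2.0

def explain_diagnosis(patient):
    """Return an overall confidence score plus each feature's
    contribution to the log-odds, so a reviewer can see what
    pushed the prediction up or down."""
    contributions = {f: WEIGHTS[f] * patient[f] for f in WEIGHTS}
    log_odds = BIAS + sum(contributions.values())
    confidence = 1 / (1 + math.exp(-log_odds))  # logistic function
    return confidence, contributions

# Features are pre-normalized readings (hypothetical patient).
patient = {"resting_heart_rate": 1.4, "systolic_bp": 1.1, "smoker": 1.0}
confidence, contributions = explain_diagnosis(patient)
print(f"confidence: {confidence:.2f}")
for feature, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {value:+.2f}")
```

Production explainability tooling is far more sophisticated (handling nonlinear models, interactions, and counterfactuals), but the core contract is the same: every score ships with a per-feature account of how it was reached.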

What’s in a process

A strong foundation for ethical technology and trust will be shaped by the principles of an organization’s leaders and realized in business processes.

  • Respect stakeholder privacy. One of technology disruption’s most far-reaching effects has been to accelerate the collection, analysis, and dissemination of information. Not so long ago, the transactional details of our lives were kept in physical file cabinets, pulled out and referenced for specific needs. Today, systems routinely collect these details and combine them with our purchase histories, posts on social media, online searches, and even the route we drive to work each day.10 If consumers have reason to believe their data is being used in ways they don’t approve of, reactions can include calls for boycotts, public inquiries, and even severe penalties under strict regulations, such as the European Union’s General Data Protection Regulation and the California Consumer Privacy Act. Companies should create data privacy policies that build, rather than erode, public trust. A natural first step can be to ensure the data usage aligns with the company mission.11 For instance, JD Wetherspoon, a pub company serving the United Kingdom and Ireland, recently deleted more than 656,000 customer email addresses, having concluded that marketing emails were an intrusive approach to customer interaction that provided little value.12 This case highlights the importance of aligning data collection and usage with a company’s values and, by extension, supporting the company’s trust relationship with the customer.
  • Be transparent. Companies can build trust with stakeholders by proactively and transparently demonstrating good behavior. “Transparency becomes vital and important,” says AI Global executive director Ashley Casovan.13 “Whether or not people are interested in seeing the resources and data behind it doesn’t really matter. Simply knowing that companies have transparent policies provides more confidence that they are doing the right thing.” Transparency extends beyond policies explaining data collection and usage practices. For instance, rather than masquerade as humans, intelligent agents or chatbots should identify themselves as such. Companies should disclose the use of automated decision systems that affect customers14 and should stay focused on the customer when problems occur, providing both speed and quality in response. The fallout from negative incidents need not include customer loss or reputation-damaging headlines.15
  • Respect differing cultural norms. An organization’s overall approach to building trust is informed by interests, experiences, and professional standards as well as societal norms and government controls. It can be challenging to serve a global market in which expectations on government surveillance or law enforcement cooperation vary widely. For example, what is expected surveillance in some countries might seem outrageous elsewhere; cooperation with law enforcement is routine in many countries but perhaps unwise in places with rampant corruption or lack of protection for political or religious rights. Some countries have very specific regulations requiring explicit customer consent for data usage; meanwhile, some municipalities are passing legislation—such as bans on facial recognition technology—that can conflict with rules at other levels of government. Effective governance of emerging technologies requires all relevant stakeholders—industry, consumers, businesses, governments, academia, and society—to work together. Businesses can play a key role in helping governments as they develop laws and standards that increase the reliability of emerging technologies16—frank, candid discourse about new technologies, for example, could lead to new rules and guidance concerning matters of privacy, transparency, inclusivity, accessibility, inequality, and more.17

Empower the people

Since nearly everyone in an organization uses technology, ethical technology and trust is a topic that touches everyone.

  • Deploy the power of all. Companies can waste time and money creating something that excludes a customer group or providing a service with undesirable side effects. Perhaps even worse, they may build solutions that undermine customer trust. Often, design dilemmas begin with a homogeneous group of people designing products, processes, or services without thinking through how other groups of people might be affected. Leading companies are changing this dynamic by creating teams and roles that reflect their diverse customer base and bringing in multiple viewpoints from different industries, economic backgrounds, educational experiences, genders, and ethnic backgrounds.18 A 2013 Harvard survey revealed that organizations with leadership teams that have a combination of at least three inherent (ones you are born with) and three acquired (ones you gain through experience) diversity traits out-innovate and outperform the others; these organizations are 45 percent more likely to report growth in market share and 70 percent more likely to report capturing a new market.19
  • Teach them to fish. Training technologists to recognize their own biases and to eliminate bias in the products they create is an important step toward creating a culture that emphasizes trust. But it is only one step. Building awareness of how technology affects stakeholder trust in those not directly involved or responsible for technology and creating associated decision-making frameworks are additional steps organizations should consider. This is especially important in non–digital native organizations, where the ripple effects of day-to-day uses of technology may be less obvious to leaders and teams. Companies should consider what resources may be needed to help their employees recognize ethical dilemmas, evaluate alternatives, and make (and test) ethical technology decisions.20
  • Give employees a reason to trust. Much of the anxiety over AI and other advanced technologies stems from the fear of the displacement of labor. From an ethical perspective, this presents business leaders with a challenge: balancing the best interests of the business, the employees, and the wider community and society. It’s a task made more complex by the fact that advanced technology systems are not self-sufficient. While AI can replace some jobs, for example, it creates others that often require specialized skills and training.21 Companies can build trust with employees by advising them how technology may affect their jobs in the future. This could include retraining workers whose roles may evolve and who will likely work with automated systems.22

360 degrees of opportunity

Companies that don’t consider technology to be their core business may assume that these considerations are largely irrelevant. In truth, no matter the industry or geography, most organizations are increasingly reliant on advanced digital and physical technologies to run their day-to-day operations.

While there is so much emphasis on the challenges disruptive technologies bring and the existential threats to an organization’s reputation when technology isn’t handled correctly—whether through misfeasance or malfeasance—these same disruptive technologies can be used to increase transparency, harden security, boost data privacy, and ultimately bolster an organization’s position of trust.

For example, organizations can pivot personalization algorithms to provide relevant recommendations based on circumstance—for example, offer an umbrella on a rainy day rather than an umbrella after someone buys a raincoat. By focusing on relevance rather than personalization, AI recommendations are likely to seem more helpful than invasive.23
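The relevance-over-personalization idea can be sketched as a simple rule-based recommender. The rules and item names below are hypothetical and purely for illustration: suggestions are driven first by the customer’s current circumstance, and anything just purchased is filtered out—avoiding the raincoat-then-umbrella misstep the text describes.

```python
def recommend(context, purchase_history):
    """Rank suggestions by current circumstance, so recommendations
    feel helpful rather than invasive."""
    suggestions = []
    # Circumstance-driven rules take priority (illustrative only).
    if context.get("weather") == "rain" and "umbrella" not in purchase_history:
        suggestions.append("umbrella")
    if context.get("temperature_c", 20) < 5:
        suggestions.append("gloves")
    # Avoid the classic misstep: re-offering what was just bought.
    recent = set(purchase_history[-3:])
    return [s for s in suggestions if s not in recent]

# Rainy day, raincoat already bought: the umbrella is still relevant.
print(recommend({"weather": "rain"}, ["raincoat"]))  # ['umbrella']
```

A production system would learn these rules rather than hand-code them, but the design choice is the same: condition on the customer’s present situation rather than replaying their purchase history.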

Deloitte surveys have found a positive correlation between organizations that strongly consider the ethics of Industry 4.0 technologies and company growth rates. For instance, in organizations that are witnessing low growth (up to 5 percent), only 27 percent of the respondents indicated that they are strongly considering the ethical ramifications of these technologies. By contrast, 55 percent of the respondents from companies growing at a rate of 10 percent or more are highly concerned about ethical considerations.24

After all, the pursuit of trust is not just a 360-degree challenge. It is also a 360-degree opportunity.

Lessons from the front lines

A healthy foundation for trust

Disruptions in the health care industry—including new care delivery models, consumer demand for digital experiences, declining reimbursements, and growing regulatory pressures—are driving many health care organizations to use technology to improve efficiency, cut costs, and improve patient care. And there could be an inadvertent benefit: Technology could help health care systems build trust with patients and providers.

Providence St. Joseph Health (PSJH) is leveraging technology to adhere to its mission of improving the health of underprivileged and underserved populations, says B.J. Moore, CIO of PSJH.25 Technology is helping the Catholic not-for-profit health system simplify complex experiences to enhance caregiver and patient interactions, modernize the operating environment and business processes, and innovate with cloud, data analytics, AI, and other technologies to help improve patient care.

In the process, PSJH is building trust. For example, the organization is collaborating with technology partners to standardize cloud platforms and productivity and collaboration tools across its 51 hospitals and 1,085 clinics, a move that will improve provider and patient engagement and enable data-driven clinical and operational decision-making. It also aims to develop the first blockchain-powered integrated provider-payer claims processing system. Such technological breakthroughs can increase trust—but careless deployment and negligence can quickly erode it. That’s why Moore has doubled down on establishing and maintaining a solid technology foundation for innovation and, by extension, trust. “Technology holds so much promise for helping patients at scale,” he says. “But it also has the potential to cause damage at scale.”

For example, data analytics, AI, and machine learning can help researchers and clinicians predict chronic disease risk and arrange early interventions, monitor patient symptoms and receive alerts if interventions are needed, estimate patient costs more accurately, reduce unnecessary care, and allocate personnel and resources more efficiently. When patients understand these benefits, they’re generally willing to share their personal and health information with care providers. But their trust could diminish—or vanish—if weak data security or governance protocols were to result in a data breach or unauthorized use of private health information. This could cause patients to conceal information from care professionals, lose confidence in diagnoses, or ignore treatment recommendations.

A number of industry regulations help ensure patient privacy and safety, and PSJH has another effective governance and oversight mechanism: a council of sponsors, consisting of clergy and laypeople, that holds moral accountability for PSJH’s actions in service of its mission. Sponsors help develop guidelines that ensure adherence to mission and values and advise the organization’s executive leadership and board of trustees on trust-related technology matters, such as the ethical use of data and the impact of technology on employees and caregivers.

“We’re continuously working to raise awareness of technology’s role in improving health,” Moore says. “Educating and communicating with patients, care professionals, regulatory bodies, and other key stakeholders can help prevent potential barriers to rapid experimentation and innovation and allow us—and our patients—to fully experience the benefits of technology.”

Do what’s right: CIBC’s strategic approach to building trust and engagement

CIBC is using technology to understand and anticipate individual client needs with the goal of delivering highly personalized experiences—an initiative it calls Clientnomics™. Terry Hickey,26 CIBC’s chief analytics officer, recognized that AI-based algorithms could deliver the client insights required to drive Clientnomics but that to be successful, leaders needed to understand and share with employees how AI will complement and support the work they’re doing, versus replacing their jobs. The bank also needed to maintain clients’ trust by protecting their data and governing its use.

In early 2019, leaders from the bank’s analytics, risk, and corporate strategy teams collaborated to develop an organizationwide AI strategy, which CIBC’s senior executive committee and board of directors approved. At the heart of the strategy are guiding principles that address questions such as: When will we use the technology? When will we not use it? How do we ensure that we have our clients’ permission?

To reinforce employee trust, the strategic plan stated that a primary purpose of AI would be to augment employees’ capabilities to achieve company goals. Leaders agreed to focus on funding AI use cases that support employees in their roles and improve practices that aren’t currently optimized.

With the strategy in place, the next step was to build an AI governance process to ensure that new technology projects comply with the strategy and guiding principles. When a new project is proposed, stakeholders answer a series of questions that help them plan and document what they want to accomplish. These questions cover a broad range of ethical considerations, including project goals, possible inherent biases, and client permissions. Approved project documents are stored in a centralized library that regulators, internal auditors, and other reviewers can reference to explain the thought process behind the algorithm or model.

CIBC has also developed advanced analytic techniques to help govern its use of data—for instance, encoding client data in a way that it cannot be reverse-engineered to identify an individual. The analytics team also devised a way to assign a data veracity score—based on data quality and integrity, possible bias, ambiguity, timeliness, and relevance—to each piece of information that could be used by an algorithm. The algorithmic models are designed to recognize and treat the data veracity appropriately, supporting more reliable, trustworthy, and engaging interactions.
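Both techniques in the paragraph above can be sketched briefly. The dimension weights, key, and identifier format below are hypothetical—CIBC’s actual methods are not public—but the sketch pairs a simple weighted average over the named dimensions with a keyed HMAC-SHA256 hash, one common way to produce a stable identifier that cannot be reverse-engineered without the secret key:

```python
import hashlib
import hmac

# Hypothetical dimension weights; the bank's actual scoring method
# is not public.
VERACITY_WEIGHTS = {"quality": 0.3, "bias": 0.2, "ambiguity": 0.2,
                    "timeliness": 0.15, "relevance": 0.15}

def veracity_score(ratings):
    """Combine per-dimension ratings (0.0 = worst, 1.0 = best) into a
    single weighted score that models can use to discount unreliable
    inputs."""
    return sum(VERACITY_WEIGHTS[d] * ratings[d] for d in VERACITY_WEIGHTS)

def pseudonymize(client_id, secret_key):
    """Keyed one-way hash (HMAC-SHA256): yields a stable join key that
    cannot be reversed to the original identifier without the key."""
    return hmac.new(secret_key, client_id.encode(),
                    hashlib.sha256).hexdigest()[:16]

ratings = {"quality": 0.9, "bias": 0.8, "ambiguity": 0.7,
           "timeliness": 1.0, "relevance": 0.6}
score = veracity_score(ratings)
token = pseudonymize("client-12345", b"rotate-me-regularly")
```

Because the hash is keyed, the same client always maps to the same token within a system, yet an attacker holding only the tokens cannot enumerate identities—provided the key is kept secret and rotated.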

As the analytics team launches Clientnomics, members are focused on developing customized AI-supported client experiences rather than large-scale technology projects. So far, they have accumulated 147 use cases, completing 40 in the first year.

For example, when a client calls CIBC’s contact center, a predictive model dynamically configures the interactive voice response menu based on the client’s recent transactions and offers the most relevant information at the top of the menu. The bank aims to cement client relationships over time with a continuous string of personalized interactions.

“In my previous role,” Hickey says, “I spent a lot of time with organizations around the world. Everyone talked about the benefits and future potential of AI, and some completed proofs-of-concept, but few were able to implement them, especially in banking and finance. By proactively addressing how we will—and will not—use technology, CIBC has embraced the positive benefits it can deliver to employees and clients. All of this in less than a year.”

Trust encoded in Abbott’s DNA

In the health care industry, trust is a primary driver of patient behavior: Trusted organizations have an edge in influencing behaviors that can create more positive health outcomes. For 130-year-old global health care company Abbott, trust is top of mind as it evolves and expands its portfolio of diagnostic products, medical devices, nutritionals, and branded generic medicines, says CMO Melissa Brotz.27

With technology-driven products such as sensor-based glucose monitoring systems, smartphone-connected insertable cardiac monitors, and cloud-connected implantable defibrillators and pacemakers, Abbott takes a multifaceted approach to trust, adds CIO Mark Murphy.28 Across the enterprise and its connected technologies, this includes comprehensive data protection policies, employee training programs, an external ecosystem of trust-based partners, and other components.

For example, Abbott is exploring multiple data-enabled opportunities to improve health care, such as a machine learning solution that combines performance data from the company’s diagnostics platforms with global clinical and patient demographic data to help health care providers diagnose heart attacks.29 To safeguard patient data and privacy—a core facet of trust—Abbott has enacted a number of enterprisewide policies, procedures, and annual employee training and certification programs related to data handling and protection and compliance with national and global regulatory mandates. Leaders have also made significant investments in cybersecurity capabilities and controls embedded into product designs, which is increasingly critical for a company such as Abbott, with products and services that are heavily connected and integrated—often with other products, systems, and apps.

In addition, ensuring patient trust is a responsibility that falls to each of Abbott’s 103,000 employees, from the board of directors and C-suite leadership to researchers, product designers, and engineers. Company leadership, for instance, is involved in data and product security oversight groups and board subcommittees, while employees participate in rigorous education programs on the implications of data privacy, security, and transparency. “Abbott is focused on helping people live better, healthier lives,” Murphy notes. “Often, technology is the enabler to help us do that, but it always starts with the patient. We know that when we build technology, we are doing so on behalf of the person who wears it, accesses it, or lives with it inside their body. And that means we must protect it—securely and responsibly.”

Abbott also relies on a strong external ecosystem to maintain patient trust. Independent third parties and research groups test Abbott’s products and services and assess their vulnerabilities on an ongoing basis. For example, the company is part of the #WeHeartHackers initiative, a collaboration between the medical device and security research communities that seeks to improve medical device security. At a recent event, Abbott teamed with university researchers to build a mock immersive hospital that enabled researchers to practice cybersecurity defense techniques.30

Rounding out Abbott’s trust ecosystem are patients and care providers themselves. To learn what concepts such as trust, security, and privacy mean to the different users of its products and services, the company regularly holds focus groups with them and produces educational material to raise awareness of these issues.

Ultimately, Brotz says, data-enabled technologies that help people live better lives are an extension of the lifesaving products and services that patients and their care providers have trusted for 130 years. “Patients place the highest levels of trust in us, and we take it very seriously,” she says. “It’s part of our DNA. Our greatest responsibility is to keep them and their data safe and secure.”

Rebuilding security from the ground up to maintain customer trust

Because a company’s approach to technology directly affects stakeholder trust in its brand, businesses that are leveraging advanced technologies can benefit from considering the technologies’ impact on ecosystem partners, employees, customers, and other key stakeholders. Strong security controls and practices are foundational elements for building and maintaining stakeholder trust. Recognizing the impact of security breaches on customer trust, Google went beyond the expected table stakes by completely redesigning its security model to protect enterprise systems and data.

A decade ago, as Google moved internal applications and resources to the cloud, its security perimeter was constantly expanding and changing, complicating the defense of its network perimeter. At the same time, companies were seeing more sophisticated attacks by nation-state-sponsored hackers, testing the limits of the perimeter-based model of security. Hence, Google decided to completely overhaul its security approach and implement a new security model that turned the existing industry standard on its head, says Sampath Srinivas, Google product management director for information security.31

Google security experts could no longer assume that walling off the network would provide the security required to maintain system integrity and customer trust. They sought to reinvent the company’s existing security architecture, since the traditional castle-and-moat model—based on a secure network perimeter with VPN-based employee access to internal applications—was no longer adequate. The goal: to ensure that employees could use any corporate application from any location on any device as easily as if they were using Gmail and as safely as if they were in a Google office.

Google embraced the zero-trust concept, an innovative security model that eliminates network-based trust, Srinivas says, instead applying access controls to applications based on user identity and the status of their devices, regardless of their network location.

Google’s zero-trust security strategy treats every single network request as if it came from the internet. It applies context-aware access policies based on signals such as user identity, device attributes, session information, IP address, and the context of the access request itself, collected in real time by a device inventory service. A globally distributed reverse proxy server protects the target server, encrypts traffic to protect data in transmission, and acts as a sophisticated rules engine that determines access rights based on the context of the user and device, such as whether the device is fully patched. Every access request is subject to authentication, authorization, and encryption. To protect against phishing, the company—working with the industry in the FIDO Alliance standards organization—developed and deployed a new form of cryptographic hardware two-factor authentication called Security Keys.32
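The general shape of such a zero-trust access decision can be sketched as follows. The policy table, device attributes, and application names are hypothetical—this is not Google’s BeyondCorp implementation, only an illustration of deciding each request from identity and device posture rather than network location:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_groups: set       # groups asserted by the identity provider
    device_managed: bool   # device is known to the inventory service
    device_patched: bool   # posture signal collected in real time
    app: str               # internal application being requested

# Hypothetical policy table (illustrative; not Google's actual rules).
POLICIES = {
    "payroll": {"groups": {"finance"}, "require_patched": True},
    "wiki":    {"groups": {"finance", "engineering"},
                "require_patched": False},
}

def authorize(req: AccessRequest) -> bool:
    """Decide each request from identity and device posture alone;
    the network the request arrived from is deliberately never
    consulted."""
    policy = POLICIES.get(req.app)
    if policy is None or not req.device_managed:
        return False
    if policy["require_patched"] and not req.device_patched:
        return False
    return bool(policy["groups"] & req.user_groups)

# An unpatched but managed device is denied the sensitive app...
print(authorize(AccessRequest({"finance"}, True, False, "payroll")))  # False
# ...but may still reach the low-risk wiki.
print(authorize(AccessRequest({"finance"}, True, False, "wiki")))     # True
```

Note that nothing in `authorize` asks where the request came from: being inside the corporate network confers no privilege, which is the defining property of the model.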

Today, Google’s user- and device-centric security workflow allows authorized users to securely work from an untrusted network without the use of a VPN. The user experiences internal applications as if they were directly on the internet. Employees, contractors, and other users can have a seamless user-access experience from any location by simply typing in a web address—a process that dramatically reduces support burden. “To deliver on our goal of maintaining customer privacy and trust, we had to look beyond the status quo solutions, innovate, and take risks,” Srinivas says. “When we broke with tradition and changed the way we thought about our security infrastructure, it allowed us to develop a more effective way to protect data and systems.”

My take

David Danks, PhD, Professor of philosophy and psychology, Carnegie Mellon University

When I speak with leaders in the corporate world, they often ask for advice on how to build a brand that customers and employees trust. As we talk, I find that some haven’t carefully thought about what they mean by “trust.” Some define it subjectively, like a warm fuzzy feeling. At the other end of the spectrum, others assume that if a customer is willing to use a service or product, that action alone implies trust. I believe that neither of these definitions is complete or accurate.

To me, trust is a willingness to make yourself vulnerable because you expect the broader system to act in ways that support your values and interests. That doesn’t mean that you expect the company will never make a mistake or experience an unintended outcome. Instead, what’s important is that if something goes wrong, you’re confident that the company will take care of it.

This definition applies even if a company’s product isn’t 100 percent reliable. For example, I’m more likely to buy from a company I trust, even if its product is occasionally unreliable, because I’m confident that if something goes wrong, the company will take care of me and my interests. I’m less likely to buy from a company offering a highly reliable product if I’m concerned that if the unexpected happens, I’ll be left to deal with the consequences on my own.

So how should corporate leaders approach trust? The first step is to think through the relevant values and interests of both the company and its stakeholders. What are the things that matter to customers, users, employees, and shareholders? This question supports a discussion about how the product or service could advance, protect, or impair those stakeholder groups.

The second step is related to design. How can the organization design a product or service that supports or endorses those relevant values? This is where ethics comes in. From my perspective, ethics is about asking two questions: What values should we have? Then, given these values, what shall we do to advance them? Of course, sometimes values conflict, which pushes organizations to think about the problem differently. Can we design the product in such a way that we don’t have to choose? This design approach can generate innovative and trusted products.

It’s impossible to avoid unexpected consequences entirely, but leaders who bring together multidisciplinary product teams can tilt the odds in their favor. A team made up of people from a variety of backgrounds and cultures—who feel free to openly share their experiences and opinions—can often uncover creative design solutions or potential design issues. But when a conflict in values is unavoidable, leaders must make intelligent, self-aware, deliberate choices. A leader should decide what’s most important to the company—and own it.

Most leaders already know the right action to take if the ultimate goal is to build trust. But some care more about cost reduction. Or increased efficiency. Or speed to market. The list goes on. And that’s fine. Leaders can choose to build things that don’t increase user trust, if they understand why they are making that choice and are willing to accept the consequences—expected or unexpected. Problems occur when leaders make choices that damage trust without realizing what they are doing.

Another misconception that leaders often have is that being ethical conflicts with being profitable. This is a false dichotomy. Companies have proven that they can produce reliable, powerful, user-friendly—and profitable—products. And while the products may not perform perfectly all the time, trusted companies have ways to monitor and detect problems, as well as methods for addressing issues quickly and effectively.

My dream is that within 20 years, corporate leaders won’t need to ask ethicists or other advisers about human or societal impacts that could result from product design decisions. I hope the answers will be internalized into corporate cultures so that asking questions such as “Are we sure this is a good idea?” is just part of what organizations consistently do.

Executive perspectives


A company’s brand is, by definition, a contract of trust. Yet in business, brand trust can erode overnight. CEOs and C-suite leaders across the organization can communicate the importance of trust to their company’s mission and establish clear ethics guardrails. Indeed, establishing clear policies for the ethical use of technology—an important first step in earning trust—could benefit their businesses. Ultimately, individual employees act on their best understanding and awareness of the organization’s policies and values. This is no small matter: they will be making deliberate decisions about trust that manifest in their company’s strategy, purpose, and market performance. Moreover, if leaders don’t own the trust and ethics agenda, decisions will be made in a diffuse way. CEOs have an opportunity to provide clarity, education, and ongoing communication. With the entire enterprise aligned behind the C-suite’s guidelines on ethical technology and trust, CIOs can help ensure that tech strategies, development efforts, and cyber approaches support those guidelines.


One of the finance function’s primary responsibilities is to build and maintain trust among customers, business partners, and investors. Yet rising expectations of transparency are making it more difficult for finance to meet this responsibility. Consider this scenario: Using drone-based cameras, analysts identify a potential issue in your company’s manufacturing or distribution facilities that your operations team missed. Analysts unexpectedly bring up the issue on an earnings call. Markets now expect companies to respond to situations such as this in near real time. Failure to do so raises doubts, which in turn can erode market trust. To meet this challenge, finance organizations will likely need to collect more data from across the enterprise and deploy advanced analytics that enable real-time reporting. They can also collaborate with peers to educate employees on the value that ethics and trust help create. Finally, CFOs will be able to help their companies deliver the kind of detailed, accurate, and timely responses that markets—and the analysts and investors who watch them—demand.


Cyber risk threat vectors have evolved rapidly, and attacks have become increasingly sophisticated, deliberate, and unrelenting in nature. Fifty-seven percent of companies participating in Deloitte’s 2019 Future of Cyber Survey experienced their most recent cyber incidents within the past two years.33 And the risk isn’t just that cyber incidents will destroy trust in the classical sense. The opportunity cost of what cyber vulnerabilities can prevent organizations from doing can be far greater: The specter of cybercrime and its fallout can cast a shadow over an organization’s efforts to turn technology to better use, strangling innovation and slowing digital transformation efforts to a crawl. It can also affect the bottom line, quickly and dramatically. One survey found that 48 percent of respondents had stopped using online services that reported data breaches.34 The issues of ethical technology and trust will steadily capture CXO mindshare. CIOs have a responsibility to help other enterprise leaders become more tech-savvy and understand the impact their digital strategies can have on the organization’s trust brand.

Are you ready?

  1. What is your current position of trust with your employees, customers, partners, shareholders, and community?
  2. On a scale of “What’s ethical technology?” to “We understand the ethical ramifications of our tech initiatives” to “We have ethical technology policies in place,” where does your leadership team rank?
  3. How do you measure each new digital initiative against its alignment to organizational values and impact on stakeholder trust?

Learn more

  1. Accelerating digital innovation inside and out: Learn how digitally mature organizations use ecosystems and cross-functional teams to innovate in new ways.
  2. Resilient podcast series: Listen to this award-winning podcast featuring conversations with leaders who have tackled risk, crises, and disruption.
  3. Ethical technology use in the Fourth Industrial Revolution: Explore leadership strategies for confronting ethical issues associated with Industry 4.0 technologies.

Senior contributors

Ethical technology and trust

Dan Frank
Deloitte & Touche LLP

Kirsty Hosea
Deloitte Touche Tohmatsu

Louise Nickson
Deloitte MCS Limited

Dalibor Petrovic
Deloitte LLP

Yang Chu
Senior manager
Deloitte & Touche LLP

Mariahna Moore
Senior manager
Deloitte Consulting LLP

Anand Ananthapadmanabhan
Deloitte & Touche LLP

Anu Widyalankara
Deloitte MCS Limited

Leo Barbaro
Deloitte MCS Limited

Executive perspectives authors


Benjamin Finzi
Managing director
Deloitte Consulting LLP


Ajit Kambil
Managing director
Deloitte LLP

Moe Qualander
Deloitte & Touche LLP


Deborah Golden
Deloitte & Touche LLP


Cover image by: Vasava


  1. Nancy Albinson, Sam Balaji, and Yang Chu, Building digital trust: Technology can lead the way, Deloitte Insights, September 23, 2019.
  2. Edelman, “2019 Edelman Trust Barometer,” January 20, 2019.
  3. Diana O’Brien et al., 2020 Global Marketing Trends: Beyond technology to connection, Deloitte Insights, October 15, 2019.
  4. Catherine Bannister, Brenna Sniderman, and Natasha Buckley, “Ethical tech: Making ethics a priority in today’s digital organization,” Deloitte Review, January 27, 2020.
  5. Gerald C. Kane et al., Accelerating digital innovation inside and out, Deloitte Insights, June 4, 2019.
  6. Deloitte, AI ethics: A new imperative for businesses, boards, and C-suites, accessed August 30, 2019.
  7. Albinson, Balaji, and Chu, Building digital trust.
  8. Cynthia Dwork and Vitaly Feldman, “Privacy-preserving prediction,” Conference on Learning Theory, 2018; David J. Wu, “Fully homomorphic encryption: Cryptography’s holy grail,” March 27, 2015.
  9. Deloitte, 2019 Future of Cyber Survey, accessed December 24, 2019.
  10. Tracy Kambies et al., Dark analytics: Illuminating opportunities hidden within unstructured data, Deloitte Insights, February 7, 2017; Tiffany Hsu, “They know what you watched last night,” New York Times, October 25, 2019.
  11. Diana O’Brien et al., Are you a trust buster or builder?, Deloitte Insights, October 15, 2019.
  12. Rowland Manthorpe, “Wetherspoons just deleted its entire customer email database—on purpose,” Wired, July 3, 2017.
  13. Ashley Casovan (executive director, AI Global), phone interview with authors, October 4, 2019.
  14. David Schatsky et al., Can AI be ethical?, Deloitte Insights, April 17, 2019.
  15. Deloitte, Taking a customer-centric approach to a data breach, July 2018.
  16. Microsoft, The Future Computed: Artificial Intelligence and Its Role in Society (Microsoft, 2018), p. 64.
  17. Deloitte, Ethics in the age of technological disruption: A discussion paper for the 2018 True North Conference, 2018.
  18. Kavitha Prabhakar, Kristi Lamar, and Anjali Shaikh, Innovating for all: How CIOs can leverage diverse teams to foster innovation and ethical tech, Deloitte Insights, November 18, 2019.
  19. Sylvia Ann Hewlett, Melinda Marshall, and Laura Sherbin, “How diversity can drive innovation,” Harvard Business Review, December 2013.
  20. Bannister, Sniderman, and Buckley, “Ethical tech.”
  21. Deloitte, Ethics in the age of technological disruption.
  22. Mark MacCarthy, “Planning for artificial intelligence’s transformation of 21st Century jobs,” CIO, March 6, 2018; Rachel Louise Ensign, “Bank of America’s workers prepare for the bots,” Wall Street Journal, June 19, 2018; Genpact, “New ways of working with artificial intelligence,” accessed March 27, 2019.
  23. O’Brien et al., Are you a trust buster or builder?
  24. Timothy Murphy et al., Ethical technology use in the Fourth Industrial Revolution, Deloitte Insights, July 15, 2019.
  25. B.J. Moore (CIO, Providence St. Joseph Health), phone interview with authors, October 10, 2019.
  26. Terry Hickey (chief analytics officer, CIBC), phone interview with authors, September 16, 2019.
  27. Melissa Brotz (CMO, Abbott), phone interview with authors, November 8, 2019.
  28. Mark Murphy (CIO, Abbott), phone interview with authors, November 8, 2019.
  29. Nicholas Fearn, “Artificial intelligence can help doctors better detect heart attacks,” Forbes, September 10, 2019.
  30. Joseph Marks, “The Cybersecurity 202: Hackers are going after medical devices—and manufacturers are helping them,” Washington Post, August 8, 2019.
  31. Sampath Srinivas (product management director for information security, Google), phone interview with authors, November 11, 2019.
  32. Eran Feigenbaum, “The key for working smarter, faster and more securely,” G Suite, April 2015.
  33. Deloitte, 2019 Future of Cyber Survey.
  34. Jason Reed and Jarad Carleton, The global state of online digital trust: A Frost & Sullivan white paper, Frost & Sullivan, 2018.
