There is no shortage of pontificating and handwringing over the ethics of AI, with views ranging from a future of abundance to one of dystopia. Often, the matter is reduced to concerns over bias. While bias is a valid issue, it is just one of several dimensions of trust that warrant purposeful treatment and effective governance.
Trustworthy AI does not emerge by accident. It takes purposeful attention and effective governance. Indeed, the path from conceiving an AI use case to deploying the model at scale is paved with critical decisions based on careful assessment of impact, value, and risk. Creating and using Trustworthy AI takes more than a discrete tool or a periodic review. It requires a broader governance structure that permeates the entire organization. Taking an end-to-end view, what is needed is an alignment of people, processes, and technologies that together promote effective AI governance and, ultimately, AI solutions we can trust.
When an organization orients its AI initiatives toward an intentional focus on ethics and trust, the reward is often a greater capacity to promote equity, foster transparency, manage safety and security, and address, in a structured way, the ethical dimensions of AI that matter for each use case and deployment. Reaching this future state involves a number of considerations and priority issues: mobilizing people for AI governance, enhancing processes and controls, and using technology to bolster trust.
Across the AI life cycle, there are many critical stakeholders, each of whom brings a distinct perspective and set of priorities. Whether it is an executive, a plant floor operator, or an IT professional, each stakeholder has a role to play in promoting Trustworthy AI. Some important areas for attention include:
Operationalizing Trustworthy AI typically requires creative thinking among business leaders, critical analysis throughout every stage of the AI life cycle, and reliable assurance that the tools, and the systems around them, are meeting the relevant dimensions of trust. Every business is different, with challenges and priorities unique to its strategy and objectives, and therefore different opportunities to successfully leverage AI capabilities. As such, there is no one-size-fits-all framework for effective AI processes. Instead, devising the right processes to govern the enterprise's AI programs involves several key activities.
Define the vision – A catalyzing tactic is bringing the organization's leaders together to develop a holistic, equitable approach to creating and using Trustworthy AI. This is not simply another routine C-suite meeting. Rather, in a conducive setting with focused goals, leadership can define the AI's purpose and assess how, and whether, it is delivering its intended outcomes.
Identify the risks – Risk analysis is familiar territory for business leaders, and the same principles apply to AI. Risk management strategies and ongoing model risk assessments can help the enterprise prepare for and guard against external factors that could negatively affect the model and the business strategy.
Identify the gaps – To know what process changes to make, the business should understand where gaps exist in its AI risk controls. Building on the risk analysis, organizations can implement or adapt processes and controls to support AI governance. Importantly, AI governance encompasses more than the coding of the model; it also includes the broader infrastructure necessary for successful implementation and oversight of AI models.
Validate performance – Business leaders need confidence that AI models perform as expected and in line with business strategy and regulatory requirements. This requires increased transparency, which can be achieved through a rigorous validation process: model testing, assessing whether documentation adequately describes the theory and design of the AI algorithm, and ongoing monitoring.
While much of the work of transforming the workforce and processes to foster Trustworthy AI is rooted in human planning and decision-making, there is clearly a role for technology.
Deloitte uses a three-pronged approach to enabling AI governance: