
Navigating the Future: Building Trust in AI for Responsible Governance

AI has evolved into a key business tool, boosting efficiency and innovation. Fintechs in Thailand and Indonesia use AI to approve loans for the unbanked by analyzing alternative data, increasing precision and inclusivity. This growth calls for strong governance to manage AI risks and ensure responsible use.

Ing Houw Tan

Assurance Leader

Deloitte Thailand

Artificial Intelligence (“AI”) has rapidly evolved from a niche technology to a core business capability. Over the past year, its adoption has surged across industries, driven by the promise of efficiency, innovation, and competitive advantage. 

In a real-life example, a Thai fintech company with an AI-powered digital lending app has successfully approved loans for over 30% of applicants previously rejected by banks due to a lack of formal income statements or a credit history. The AI model has enabled the company to reach customers with more precision and less bias, and has also increased the overall recovery rate of the loan portfolio.

A fintech company in Indonesia is also using AI to approve loans for the unbanked. They analyse thousands of alternative data points, like phone usage and digital transactions, instead of traditional credit scores. This enables them to assess the creditworthiness of those without a banking history while reducing bias. The result is a more inclusive lending process, granting financial access to millions previously excluded by traditional banking systems.

This rapid growth drives the need to develop consistent and reliable governance frameworks that empower organizations to confidently manage the multifaceted risks emerging from AI.

Unreliable AI outcomes can lead to lawsuits, regulatory penalties, reputational damage, and even the erosion of shareholder value. As a result, organizations are under increasing pressure from management and governance bodies to ensure that AI operates as intended, aligns with strategic goals, and is used responsibly. To address these challenges, organizations must adopt a proactive approach. This includes aligning AI solutions with business objectives, reducing bias in data and machine learning outputs, and fostering a culture of transparency and explainability.

AI Risk Management Considerations

An AI governance framework is a structured system of policies, standards, and processes designed to guide the entire lifecycle of AI. It guides the responsible development of AI, and its primary goal is to maximize benefits while mitigating significant risks such as bias and privacy violations, ensuring compliance with evolving regulations, fostering public trust, and driving innovation.

Designing a responsible AI governance framework is complex, especially in a landscape where best practices are still emerging. Successful AI risk management programs often share several key foundational principles.

First, there must be a balance between innovation and risk management. AI governance should not be perceived as a barrier to innovation. Instead, awareness campaigns can help stakeholders understand how risk management enhances trust and long-term value. Second, consistency with existing risk management practices—such as model risk management (MRM)—can streamline implementation and improve efficiency.

Stakeholder alignment is another critical factor. Engaging cross-functional teams, including cybersecurity, IT, legal, and compliance, ensures that governance frameworks are comprehensive and well-supported. Additionally, organizations must be prepared to manage regulatory changes. As AI regulations evolve, strong change management practices will be essential to maintain compliance and build a trustworthy AI environment.

Ultimately, AI risk management should be integrated into the broader enterprise risk framework. Leveraging existing structures while tailoring them to the unique challenges of AI can help organizations build resilient and adaptable governance systems.

How Organizations Can Get Started on Their AI Governance Journey

Deloitte’s Trustworthy AI framework offers a structured approach to implementing ethical AI practices and mitigating risks throughout the AI lifecycle. Embarking on an AI governance journey requires a structured and holistic approach. Organizations can begin by focusing on three key areas: design, process, and training.

  • Design involves conceptualizing and documenting the intended use, objectives, and risks of AI systems. This includes consulting a diverse group of stakeholders, gathering use-case scenarios, and identifying potential sources of adverse outcomes. Understanding current regulatory and legal requirements is also essential at this stage.
  • Process refers to the development, implementation, validation, and ongoing monitoring of AI systems. A clear statement of purpose should guide model development, supported by objective model selection criteria and rigorous testing. Development tests should evaluate model assumptions, stability, bias, and behavior across various input values. Back-testing and cross-validation further ensure model robustness.
  • Training is equally important. AI ethics training for developers and end-users fosters awareness of potential harms and ethical considerations. Using reputable, well-documented data sources and representative datasets ensures fairness. Data quality metrics should be identified and monitored, and efforts should be made to minimize bias through careful feature selection and transformation assessments.
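The back-testing and cross-validation mentioned above can be sketched in a few lines. The following is a minimal illustration, not a prescribed implementation; the `train_fn` and `score_fn` callables are hypothetical placeholders for an organization's own model-fitting and evaluation logic.

```python
# Minimal k-fold cross-validation sketch: hold out each fold in turn,
# train on the remainder, and score on the held-out fold.

def k_fold_indices(n_samples, k):
    """Split indices 0..n_samples-1 into k near-equal contiguous folds."""
    fold_size, remainder = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        size = fold_size + (1 if i < remainder else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(data, labels, train_fn, score_fn, k=5):
    """Return one score per fold; high variance across folds signals instability."""
    folds = k_fold_indices(len(data), k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for f_i, f in enumerate(folds) if f_i != i for j in f]
        model = train_fn([data[j] for j in train_idx],
                         [labels[j] for j in train_idx])
        scores.append(score_fn(model,
                               [data[j] for j in test_idx],
                               [labels[j] for j in test_idx]))
    return scores
```

Comparing per-fold scores, both overall and within demographic subgroups, helps surface the stability and bias issues that the development tests described above are meant to catch.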

By following these steps, organizations can lay a strong foundation for responsible AI use and governance.

Establishing Trustworthy AI Governance

At the heart of effective AI governance lies a commitment to trustworthiness. Deloitte’s Trustworthy AI framework outlines seven core principles that form the foundation of a sound AI risk management program:

  • Fair and Impartial: Limiting bias in AI outputs is crucial for all models. Organizations should identify and address this bias to prevent unfair outcomes. For example, a generative AI chatbot might perform well in one culture but poorly in another, potentially reducing user trust in the tool and the business itself, impacting reputations and effectiveness.
  • Transparent and Explainable: Stakeholders should understand how their data is used and how AI decisions are made. Algorithms and correlations must be open to inspection. For example, a medical recommendation by a generative AI system may require a notation that it was machine-derived, along with accessible, clear logs or explanations of why that recommendation was made.
  • Accountable: Clear policies must define who is responsible for decisions made or influenced by AI technologies. Whether the enterprise builds a model in-house or accesses one through a vendor, there must be a clear connection between the generative AI model and the deploying business.
  • Robust and Reliable: AI systems should consistently produce reliable outputs and be capable of learning from both humans and other systems.
  • Private: The data used to train and test AI models may contain sensitive information. Consumer privacy must be respected, with data usage limited to its intended purpose. Users should have control over their data-sharing preferences.
  • Safe and Secure: Generative AI content can be misused to create false information affecting businesses, customers, or society. To ensure safety and security, businesses must carefully address cybersecurity risks and align AI outputs with both business goals and user interests.
  • Responsible: AI should be developed and operated in a socially responsible manner, reflecting ethical values and societal norms. Enterprise leaders also need to determine whether an AI use case is a responsible decision for their organisation.
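To make the "Fair and Impartial" principle concrete, one common quantitative check is the disparate impact ratio between demographic groups. The sketch below is illustrative only: the binary approve/reject encoding, the group labels, and the 0.8 ("four-fifths") threshold are assumptions commonly used in practice, not a prescribed standard.

```python
# Minimal fairness-check sketch: compare approval rates across groups and
# flag cases where the lowest rate falls well below the highest.

def approval_rates(decisions, groups):
    """Per-group approval rate: share of positive (1) decisions in each group."""
    totals, positives = {}, {}
    for decision, group in zip(decisions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + decision
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Ratio of the lowest to the highest group approval rate (1.0 = parity)."""
    rates = approval_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

def flags_disparate_impact(decisions, groups, threshold=0.8):
    """True when the ratio falls below the illustrative four-fifths threshold."""
    return disparate_impact_ratio(decisions, groups) < threshold
```

A ratio well below 1.0 flags a gap in approval rates between groups and would prompt deeper investigation of the features, training data, and model behaviour, in line with the monitoring practices described earlier.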

By embedding these principles into their AI governance frameworks, organizations can foster trust among stakeholders, ensure compliance with evolving regulations, and unlock the full potential of AI in a responsible and sustainable way.

Looking Ahead to a Trustworthy Future

The journey to trustworthy AI starts now—by embracing responsible governance, organisations can lead with confidence, unlock transformative value, and shape a future where innovation and integrity go hand in hand.
