AI has evolved into a key business tool, boosting efficiency and innovation. Thai fintechs use AI to approve loans for the unbanked by analyzing alternative data, increasing precision and inclusivity. This growth calls for strong governance to manage AI risks and ensure responsible use.
Ing Houw Tan
Assurance Leader
Deloitte Thailand
Artificial Intelligence (“AI”) has rapidly evolved from a niche technology to a core business capability. Over the past year, its adoption has surged across industries, driven by the promise of efficiency, innovation, and competitive advantage.
In one real-life example, a Thailand-based fintech company with an AI-powered digital lending app has successfully approved loans for over 30% of applicants previously rejected by banks for lacking formal income statements or a credit history. The AI model has enabled the company to reach customers with greater precision and less bias, and has also increased the overall recovery rate of its loan portfolio.
A fintech company in Indonesia is also using AI to approve loans for the unbanked. They analyse thousands of alternative data points, such as phone usage and digital transactions, instead of traditional credit scores. This enables them to assess creditworthiness for people without a banking history while reducing bias. The result is a more inclusive lending process, granting financial access to millions previously excluded by traditional banking systems.
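To make the idea concrete, the sketch below shows how a handful of alternative data signals might be combined into a credit score. This is purely illustrative: the feature names, normalization cut-offs, weights, and approval threshold are all invented for this example, and real lenders derive such parameters from trained machine-learning models over thousands of features.

```python
# Illustrative sketch only: a toy "alternative data" credit score.
# Feature names, cut-offs, and weights are assumptions for illustration,
# not a real lender's model.

def alternative_credit_score(applicant: dict) -> float:
    """Combine a few alternative data signals into a score in [0, 1]."""
    # Normalize each signal into [0, 1] (the cut-offs are assumptions).
    topup_regularity = min(applicant["monthly_topups"] / 4, 1.0)
    wallet_activity = min(applicant["digital_txns_per_month"] / 30, 1.0)
    bill_punctuality = applicant["bills_paid_on_time"] / max(applicant["bills_due"], 1)

    # Weighted sum; in practice the weights come from model training.
    return 0.3 * topup_regularity + 0.3 * wallet_activity + 0.4 * bill_punctuality

def decide(applicant: dict, threshold: float = 0.6) -> str:
    """Approve above the threshold; otherwise route to manual review."""
    return "approve" if alternative_credit_score(applicant) >= threshold else "review"

applicant = {
    "monthly_topups": 5,           # phone top-ups per month
    "digital_txns_per_month": 42,  # e-wallet transactions per month
    "bills_paid_on_time": 11,
    "bills_due": 12,
}
print(decide(applicant))  # decision made without a bank statement or credit file
```

The point of the sketch is that none of the inputs require a formal banking relationship, which is what opens lending to previously excluded applicants.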
This rapid growth drives the need to develop consistent and reliable governance frameworks that empower organizations to confidently manage emerging risks associated with AI, which are multifaceted.
Unreliable AI outcomes can lead to lawsuits, regulatory penalties, reputational damage, and even the erosion of shareholder value. As a result, organizations are under increasing pressure from management and governance bodies to ensure that AI operates as intended, aligns with strategic goals, and is used responsibly. To address these challenges, organizations must adopt a proactive approach. This includes aligning AI solutions with business objectives, reducing bias in data and machine learning outputs, and fostering a culture of transparency and explainability.
AI Risk Management Considerations
An AI governance framework is a structured system of policies, standards, and processes designed to guide the entire lifecycle of AI. It guides the responsible development of AI, and its primary goal is to maximize benefits while mitigating significant risks such as bias and privacy violations, ensuring compliance with evolving regulations, fostering public trust, and driving innovation.
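One way to picture a "structured system of policies, standards and processes" across the AI lifecycle is as a set of stage-by-stage checkpoints. The lifecycle stage names and checks below are assumptions chosen for illustration; an actual framework would be tailored to the organization's risk appetite and regulatory context.

```python
# Illustrative sketch: an AI governance framework as lifecycle checkpoints.
# Stage names and checks are assumptions, not a prescribed standard.

AI_LIFECYCLE_POLICY = {
    "design":   ["business objective documented", "bias risk assessed"],
    "build":    ["training data provenance logged", "privacy review passed"],
    "validate": ["performance tested on holdout data", "explainability report produced"],
    "deploy":   ["human oversight defined", "rollback plan in place"],
    "monitor":  ["drift monitoring active", "incident escalation path defined"],
}

def governance_gaps(completed: dict) -> dict:
    """Return the checks still outstanding at each lifecycle stage."""
    return {
        stage: [c for c in checks if c not in completed.get(stage, [])]
        for stage, checks in AI_LIFECYCLE_POLICY.items()
    }

# Example: design is fully signed off, build is partially done.
status = {
    "design": ["business objective documented", "bias risk assessed"],
    "build": ["training data provenance logged"],
}
gaps = governance_gaps(status)
print(gaps["build"])  # the privacy review is still outstanding
```

Encoding the framework as data rather than prose makes gaps auditable: a model cannot advance to the next stage while its checklist is incomplete.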
Designing a responsible AI governance framework is complex, especially in a landscape where best practices are still emerging. Successful AI risk management programs often share several key foundational principles.
First, there must be a balance between innovation and risk management. AI governance should not be perceived as a barrier to innovation. Instead, awareness campaigns can help stakeholders understand how risk management enhances trust and long-term value. Second, consistency with existing risk management practices—such as model risk management (MRM)—can streamline implementation and improve efficiency.
Stakeholder alignment is another critical factor. Engaging cross-functional teams, including cybersecurity, IT, legal, and compliance, ensures that governance frameworks are comprehensive and well-supported. Additionally, organizations must be prepared to manage regulatory changes. As AI regulations evolve, strong change management practices will be essential to maintain compliance and build a trustworthy AI environment.
Ultimately, AI risk management should be integrated into the broader enterprise risk framework. Leveraging existing structures while tailoring them to the unique challenges of AI can help organizations build resilient and adaptable governance systems.
How Organizations Can Get Started on Their AI Governance Journey
Deloitte’s Trustworthy AI framework offers a structured approach to implementing ethical AI practices and mitigating risks throughout the AI lifecycle. Embarking on an AI governance journey requires a holistic approach, and organizations can begin by focusing on three key areas: design, process, and training.
By focusing on these areas, organizations can lay a strong foundation for responsible AI use and governance.
Establishing Trustworthy AI Governance
At the heart of effective AI governance lies a commitment to trustworthiness. Deloitte’s Trustworthy AI framework outlines seven core principles that form the foundation of a sound AI risk management program.
By embedding these principles into their AI governance frameworks, organizations can foster trust among stakeholders, ensure compliance with evolving regulations, and unlock the full potential of AI in a responsible and sustainable way.
Looking Ahead to a Trustworthy Future
The journey to trustworthy AI starts now—by embracing responsible governance, organizations can lead with confidence, unlock transformative value, and shape a future where innovation and integrity go hand in hand.