
Only 13% of financial services organisations polled globally are ready to implement trustworthy AI, Deloitte report reveals

Deloitte’s AI at a crossroads report finds that fewer than 6 in 10 employees at Malaysian organisations polled have the ethical and legal compliance knowledge to use AI solutions.

KUALA LUMPUR, 7 APRIL 2025 – Generative AI (GenAI) has been creating waves of rapid transformation in industries across the globe. However, businesses continue to face challenges in navigating the risks from rapid AI adoption. Deloitte’s AI at a crossroads report revealed that over half (51%) of Malaysian companies polled continue to face technology implementation challenges, with 37% agreeing that they continue to have an insufficient understanding of the technology and its potential.

With investment in AI across the Asia Pacific region expected to grow fivefold by the end of the decade, reaching US$117 billion by 2030, GenAI has quickly become the region’s fastest-growing enterprise technology. However, this productivity comes at a cost: while interest in GenAI adoption remains high, 9 in 10 Malaysian companies polled said that it also brings security vulnerabilities such as cybersecurity risks.

“While AI solutions offer powerful productivity tools, this also increases the possibility of data breaches or regulatory fines if these tools are not managed properly,” shared Dr Justin Ong, Financial Services Industry Leader, Deloitte Malaysia. “The Malaysian Communications and Multimedia Commission’s (MCMC) introduction of a regulatory framework for social media services and internet messaging service providers is a step in the right direction, but more needs to be done. Less than 1 in 10 organisations across the Asia Pacific currently have the governance structures necessary to achieve trustworthy AI, which Deloitte has also adopted in our own GenAI initiatives under the Trustworthy AI Framework.”

To assist organisations in taking practical steps towards achieving trustworthy AI, Deloitte has introduced an AI Governance Maturity Index. Based on 12 key indicators across five pillars, 91% of organisations were classified as having only ‘Basic’ or ‘In progress’ AI governance structures in place, highlighting substantial room for improvement in AI governance. The report further revealed that fewer than 60% of employees at Malaysian organisations polled have the required level of skill to use AI solutions in an ethically and legally compliant way.

While the financial services industry (FSI) is one of the leading adopters of digital innovation, the report revealed that only 13% of FSI organisations across the globe were deemed ‘Ready’ for trustworthy AI. As demand for financial services continues to grow, particularly among younger and more tech-literate consumers, good governance is key to ensuring the industry’s long-term advancement; without it, the industry risks losing out.

“Despite the industry’s efforts to strengthen cybersecurity measures in recent times, we’ve seen that 42% of financial services organisations polled reported an increase in incidents within the last financial year,” Dr Justin continued. “As a sector that holds sensitive financial information, the financial services industry must increase governance efforts to facilitate the necessary security levels required.”

One of the key risks, identified by 84% of Malaysian respondents, is the surveillance implications of AI, where pervasive data collection can lead to an erosion of privacy rights. With AI systems being increasingly deployed for security measures such as tracking and facial recognition, there are growing concerns about data privacy and corporate surveillance practices, requiring safeguards to prevent potential misuse of personal information.

Beyond incorporating AI into companies’ business processes, having clearly identified roles within an organisation that are accountable for managing AI standards helps ensure any emerging AI-related issues are addressed appropriately. For most organisations surveyed, this responsibility lies with senior leadership, with 91% of organisations having a board member or C-suite executive explicitly responsible. In Malaysia, however, nearly 40% of organisations surveyed do not have a system for employees to raise AI-related concerns, suggesting increased exposure to risks such as legal liability or harmful AI practices.

Deloitte’s survey involved nearly 900 senior leaders from 13 locations across the Asia Pacific region in one of the most comprehensive stocktakes of AI governance maturity levels to date. Respondents were specifically targeted to be in senior roles like chief risk officers, chief compliance officers and chief data officers across various sectors, including public, private and not-for-profit, and a range of industries (including finance, education, health and technology).