Posted: 17 May 2021 · 8 min. read

Key questions banks should ask before adopting AI/ML

A blog post by Chida Sadayappan, lead specialist, cloud, data, and machine learning, Deloitte Consulting LLP; and Ishita Vyas, cloud strategy senior consultant, Deloitte Consulting LLP.

 

Whether due to changing customer needs, evolving regulatory and compliance standards, peer pressure, or the need to remain relevant amid unprecedented pandemic times, banks have begun to realize the importance of adopting artificial intelligence (AI) and machine learning (ML). Whether banks are just starting their AI/ML journey or are already well along it, it is essential that they ask the right questions:

  • Are we ready to adopt AI/ML? 
  • Should we buy, build, or buy, then build?
  • How do we establish the right culture and proper governance to adopt AI/ML?

Are we ready to adopt AI/ML?

Before taking the first step toward implementing AI technologies on a larger scale, banks need to consider two things: 

Unlocking value from non-digitally available data: Since data is the foundational layer from which information and insights are generated, it is essential to extract value from data gathered from non-digital assets and from human-based or document-oriented processes as well. To achieve this, banks need to begin by digitizing processes and documents and automating repetitive manual processes.

Revamping legacy systems with data-focused strategies: Many banking institutions have been around for more than two decades and, as such, carry a portfolio of tightly coupled legacy systems. This has been a constant obstacle to banks’ deployment of newer technologies. It is unrealistic to replace these legacy systems in one fell swoop; instead, banks should take a phased approach: starting with better data management and governance; rationalizing IT architecture; adopting cloud to gain the computing power needed to enable AI/ML; and embracing DevOps and DevSecOps to automate development and ensure continuous delivery, innovation, and monitoring.

Transforming old processes and modernizing legacy technologies is easier said than done. However, this is a crucial step in the transformation journey.

Should we buy, build, or buy, then build?

Once banks have overcome the initial obstacles, it's time to answer the million-dollar question: build or buy?

Banks need to set an ultimate goal for AI/ML: do they aspire to become AI-first organizations, or do they simply want to transform a handful of cherry-picked, low-hanging use cases that will generate instant value?

Buy: AI/ML technology acts as a catalyst for executing routine, repetitive back- and middle-office tasks such as processing, reconciliation, and anti-money laundering (AML) more effectively and efficiently, but a do-it-yourself AI model for such use cases is not recommended. These applications will deliver quick value, but the time invested in hiring and training the right talent, plus the investment in supporting infrastructure, would be huge compared with buying an off-the-shelf solution. Even if banks choose to build their own AI with open-source tools, it can cost millions of dollars and take months to train ML algorithms to do what most vendors have already achieved.

Commercial AI platforms not only allow teams to take a data project from start to finish, but also introduce efficiencies throughout: cutting the time spent cleaning data, smoothing production issues, avoiding reinventing the wheel with every model deployment, and building in the documentation and best practices that enable reproducibility.

According to Anaconda's blog, PNC Bank has been working with the vendor since the start of its AI journey to overhaul its data science infrastructure for Python and R. Anaconda further reports that PNC is now able to build machine learning models in-house, leveraging Anaconda’s open-source platform for use cases like predicting losses, protecting the bank, and setting prices, and that its Management Information Systems group has also begun building Python applications that help with banking operations.

Build: Even though buying an off-the-shelf solution or leveraging open-source tools can be quick and convenient, there are times when banks have no choice but to build the model from scratch. Drivers may include the following:

  • The use case is so unique that no commercial AI product or tool is available in the market.
  • Gaining full ownership of the code and model is of utmost importance, as dictated by regulations.
  • The bank is making a long-term commitment, perceives the deployment of AI/ML technology as a differentiator, and does not want to share its competitive edge with others via a vendor’s pooled data lake.

Buy, then build: It is also important for banks to identify where they stand in their AI/ML transformation journey and how mature their use cases are. If they have just embarked on the AI/ML path, with use cases like automation bots (RPA), AI assistants, and computer vision, it is better to get their feet wet by buying and deploying off-the-shelf AI tools or leveraging open-source AI platforms like TensorFlow or scikit-learn, and then gradually build in-house AI/ML capabilities for more unique use cases.
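For illustration only, here is a minimal sketch of what "getting their feet wet" with an open-source platform might look like: a simple scikit-learn baseline trained on a hypothetical, already-digitized transaction dataset. The file path, column names, and label are assumptions made for the example, not references to any real bank's data or to a vendor's product.

```python
# Minimal, illustrative scikit-learn baseline (all data names are hypothetical placeholders).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical, already-digitized dataset of labeled transactions
df = pd.read_csv("transactions.csv")                                   # assumed file
X = df[["amount", "merchant_risk_score", "customer_tenure_months"]]    # assumed features
y = df["is_fraud"]                                                     # assumed binary label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Simple baseline pipeline: scale the features, then fit a logistic regression
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Evaluate with ROC AUC, a common choice for imbalanced fraud labels
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out ROC AUC: {auc:.3f}")
```

A baseline like this is deliberately unremarkable; its value is in letting a small team exercise the full path from data to evaluation before larger in-house investments are made.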

This question is gradually shifting toward “buy or buy, then build,” as this strategy offers quite a few advantages:

  • Smaller initial investment
  • Prompt kick-start in implementing AI/ML technologies
  • Buying time to hire new data scientists or train internal employees
  • Learning from any mistakes that occurred during vendor POCs 
  • More, and more accessible, data (via the vendor’s data pool) to use for in-house AI/ML models

How do we establish the right culture and proper governance to adopt AI/ML?

As the pace of AI/ML adoption among banks increases, challenges have begun to surface, such as reluctance from employees, governing models in operation amid a constantly changing regulatory environment, getting models fully integrated into production systems, and the ongoing management and maintenance of models. This calls for better governance and a transformed operating model:

Right culture: When planning to implement AI/ML, organizations may face resistance from employees who fear becoming obsolete. Hence, a major cultural shift is vital. The role of leaders is to inspire employees to adopt AI by communicating and demonstrating how AI will enable them to do their jobs better. Leaders can focus their use cases on reducing work overload, including repetitive tasks in call centers, or on expanding the business in ways that add to employment (e.g., through new product innovation or by entering new markets).

For example, a bank created a document for relationship managers that highlighted how combining their expertise and skills with AI’s tailored product recommendations could improve customers’ experiences and increase revenue and profit. The AI implementation plan also included sales competitions based on the use of new tools; the winners’ achievements were showcased in the CEO’s monthly newsletter to employees.

Proper governance: Gartner has predicted that, by 2023, 60% of organizations with more than 20 data scientists will require a professional code of conduct incorporating ethical use of data and AI. Now that AI adoption is no longer optional, leaders need to think about AI governance even before implementing the technology.

Deloitte's Trustworthy AI framework consists of six dimensions for organizations to consider when designing, developing, deploying, and operating AI systems. The framework helps manage common risks and challenges related to AI ethics and governance, including: 

  • Fair and impartial use checks. A common problem with AI is human bias finding its way into models through the data and the coding process. Banking institutions need to determine what constitutes fairness, actively identify biases in their algorithms and data, and implement control measures to avoid unexpected outcomes (a simple illustrative check appears after this list).
  • Implementing transparency and explainable AI. For AI to be trustworthy, all participants have a right to understand how their data is being used and how the AI system makes decisions. Banks must be prepared to create verifiable algorithms, attributes, and correlations.
  • Responsibility and accountability. A strong AI system must include guidelines that clearly state who is responsible for its decisions. This remains an open question for many organizations: Are developers, testers, or product managers responsible? Do all involved understand the system’s inner workings? Who among the C-suite will be ultimately accountable?
  • Putting proper security in place. To be reliable, AI needs to be protected from risks, including cybersecurity risks, that may cause physical and/or digital harm. Organizations need to carefully identify and mitigate these risks and communicate them to users.
  • Monitoring for reliability. For AI to achieve widespread adoption, it must be as robust and reliable as the traditional systems, processes, and people it is augmenting. Banks need to ensure their AI algorithms produce the expected results for each new data set, and they need to establish processes for handling issues and inconsistencies in expected outcomes.
  • Safeguarding privacy. Trustworthy AI must comply with data regulations and only use data for its stated and agreed-upon purposes. Banks should ensure that consumer privacy is respected, customer data is not leveraged beyond its intended and stated use, and consumers can opt in and out of sharing their data.
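To make the fairness dimension above a little more concrete, here is a minimal sketch of one simple, commonly used check: comparing positive-outcome rates (e.g., loan approvals) across two groups, sometimes called the demographic parity difference. The variable names, example data, and the 0.05 tolerance are illustrative assumptions, not part of the Deloitte framework or any regulatory standard.

```python
# Illustrative fairness check: demographic parity difference between two groups.
# All names, data, and the threshold below are assumptions for the sake of the example.
import numpy as np

def demographic_parity_difference(predictions, group_labels):
    """Absolute difference in positive-outcome rates between the two groups
    present in group_labels (e.g., approval rates for group A vs. group B)."""
    predictions = np.asarray(predictions)
    group_labels = np.asarray(group_labels)
    groups = np.unique(group_labels)
    assert len(groups) == 2, "This simple check compares exactly two groups."
    rate_a = predictions[group_labels == groups[0]].mean()
    rate_b = predictions[group_labels == groups[1]].mean()
    return abs(rate_a - rate_b)

# Hypothetical model outputs (1 = loan approved) and group membership
preds  = [1, 0, 1, 1, 1, 1, 0, 0, 1, 1]
groups = ["A", "A", "A", "B", "B", "B", "A", "B", "A", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.05:  # illustrative tolerance, not a regulatory threshold
    print("Flag for review: approval rates differ materially across groups.")
```

A single metric like this is only a starting point; in practice, banks would pair several fairness measures with domain review and the control measures described above.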

To summarize, irrespective of where banks are in their AI/ML transformation journey, it is advisable for them to check whether they are asking the right questions and taking care of the traditional yet foundational pillars: people, process, governance, and technology.
