Transaction monitoring is a critical component of the AML/CFT framework. Subject Persons (“SPs”) require robust transaction monitoring processes to detect suspicious transactions executed through their products and services; however, this is one of the more complex, time-consuming and costly processes within the AML/CFT framework, especially if done poorly. This is in part due to the interconnectedness of transaction monitoring with several other processes within the wider AML/CFT framework, particularly the risk assessment, KYC and due diligence activities, and in part due to its dependencies on data, systems, tools and human resources.
In this article, we explore how the challenges associated with developing and maintaining transaction monitoring processes can be managed intelligently, with the objective of deriving better outcomes, not only in terms of effectiveness and efficiency from an AML/CFT risk management perspective, but also in terms of how this process can add value to the organisation by contributing to the achievement of its strategic goals. We share our view of the key components of a robust transaction monitoring framework, the common pitfalls we see in our market, and potential solutions.
Over time, SPs have implemented different systems and processes alongside the introduction of new products and/or services and operational changes. This often results in disparate sources of data being collected and stored across the organisation, which in turn leads to inconsistent and poor data quality. Older-generation transaction monitoring systems have functional limitations that do not allow the flexibility to design dynamic rule sets and may not leverage emerging technologies that can significantly enhance anomaly detection capabilities. Processing can be complex and cumbersome due to archaic data modelling and storage methodologies and technologies.
Data quality is a cornerstone of an effective transaction monitoring system. SPs must rely on the accuracy and completeness of data, without which potentially illicit transactions can slip under the radar and remain undetected. An example of poor data quality is the incorrect tagging of customers to different business segments, e.g., retail customers being tagged as SME customers and vice versa. Poor data quality significantly restricts the ability to design and configure rules that directly address relevant typologies and may also undermine reliance on the automated alert generation process. This results in a significant increase in operational effort and cost, due to the manual work required to validate the results generated by the system.
Due to the poor quality of data being fed into transaction monitoring systems, coupled with limited dynamic monitoring capabilities, SPs are increasingly faced with the generation of large volumes of alerts. SPs engage large teams of analysts to review and analyse alerts, which in turn increases the operational risk and cost of compliance, with little to no effect on the primary objective, i.e., the fight against financial crime. Even with a large pool of people reviewing alerts, most (if not all) SPs still have significant alert backlogs, which again defeats the purpose of transaction monitoring. What good is there in identifying illicit activity weeks (if not months) after it has taken place?
In an attempt to keep pace with the number of alerts being generated and to clear alert backlogs, many SPs have been compromising on the quality of investigations completed prior to closing an alert. Often, alerts are closed without the necessary justification and/or without any supporting documentation to evidence the conclusion. In other instances, SPs have resorted to filing defensive SARs/STRs. This is not looked upon favourably by regulators, as the focus should be on fighting financial crime rather than on just-in-case approaches to maintaining compliance.
The transaction monitoring process is both critical and complex. Managing it effectively requires a structured and iterative process of continuous development and fine-tuning. Designing a transaction monitoring framework that provides structure, methodology and governance allows organisations to extract the most value from it, maximising risk management outcomes at the most efficient cost.
Our transaction monitoring framework model, as illustrated in Figure 1, is an iterative model based on four key stages within the transaction monitoring lifecycle: Develop, Configure, Operate and Optimise, wrapped in a layer of governance; together, these elements form the framework model.
To achieve transaction monitoring optimisation, SPs can consider the introduction of new technologies such as Artificial Intelligence. There is a growing consensus that adopting technological innovations, including robotics, cognitive automation, machine learning, data analytics and AI, can significantly enhance compliance processes, including transaction monitoring. In tandem, regulators have displayed an increasing openness to SPs implementing such techniques; for example, in April 2023, the FIAU issued a guidance note that encouraged the consideration of innovative technologies in transaction monitoring. SPs are designing AI tools to improve the identification of suspicious transactions and to refine the screening of Politically Exposed Persons (PEPs), sanctioned individuals and organisations. Until recently, SPs had relied heavily on rules-based systems for their transaction monitoring, which presented several limitations. SPs are now turning to machine learning to benefit from significant improvements in reducing the volume of false positives, increasing efficiency, and lowering the associated cost of compliance.
Among the many uses of machine learning, the transaction monitoring process presents a significant opportunity for its application, given the technology’s ability to make judgements and to identify behavioural patterns.
Machine learning algorithms can reduce false positive alerts by detecting suspicious behaviour and risk-classifying alerts as being higher or lower risk following an RBA. Advanced machine learning techniques allow resources to focus on high-risk activity by automating the detection of alerts that are likely to require investigation and auto-closing alerts that are non-suspicious.
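As a toy illustration of the risk-based triage described above (not a production model), the sketch below scores alerts against a hypothetical set of feature weights and routes them to auto-closure, review or escalation. The feature names, weights and thresholds are all illustrative assumptions; in practice, the weights would come from a trained machine learning model and the thresholds from back-testing against historical investigation outcomes:

```python
# Toy risk-based alert triage. The weights stand in for a trained model's
# learned parameters; feature names and thresholds are illustrative only.

# Hypothetical weights a trained model might assign to alert features.
WEIGHTS = {
    "amount_vs_profile": 0.45,    # transaction size relative to customer profile
    "high_risk_geography": 0.30,  # counterparty in a higher-risk jurisdiction
    "structuring_pattern": 0.25,  # repeated just-below-threshold transactions
}

AUTO_CLOSE_BELOW = 0.20  # alerts scoring below this are auto-closed
ESCALATE_ABOVE = 0.60    # alerts scoring above this go straight to investigators

def triage(alert: dict) -> str:
    """Return 'auto-close', 'review' or 'escalate' for an alert.

    Each feature value is expected in [0, 1]; the score is a weighted sum.
    """
    score = sum(WEIGHTS[f] * alert.get(f, 0.0) for f in WEIGHTS)
    if score < AUTO_CLOSE_BELOW:
        return "auto-close"
    if score > ESCALATE_ABOVE:
        return "escalate"
    return "review"

alerts = [
    {"amount_vs_profile": 0.1, "high_risk_geography": 0.0, "structuring_pattern": 0.0},
    {"amount_vs_profile": 0.9, "high_risk_geography": 1.0, "structuring_pattern": 0.8},
]
print([triage(a) for a in alerts])  # ['auto-close', 'escalate']
```

The point of the sketch is the workflow, not the scoring: low-scoring alerts are closed automatically with an auditable score, freeing analysts to focus on the small fraction of alerts most likely to warrant investigation.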
Anomaly detection and identification of transactional patterns
Machine learning techniques can identify patterns, data anomalies and relationships amongst suspicious entities that could have gone unnoticed under a rules-based approach.
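To make the idea of anomaly detection concrete, the following minimal sketch flags transactions that deviate sharply from a customer's historical behaviour using a simple z-score. This is deliberately simplistic: real deployments would apply multivariate machine learning models over many behavioural features rather than a single statistic over amounts:

```python
import statistics

def flag_anomalies(amounts, z_threshold=3.0):
    """Flag transaction amounts that deviate sharply from the customer's history.

    Uses a simple z-score as a stand-in for the multivariate machine learning
    models an SP would deploy in practice.
    """
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    if stdev == 0:
        return []  # no variation in behaviour, nothing to flag
    return [a for a in amounts if abs(a - mean) / stdev > z_threshold]

# Nine routine payments followed by one outsized transfer.
history = [120, 95, 130, 110, 105, 98, 125, 115, 102, 5000]
print(flag_anomalies(history, z_threshold=2.0))  # [5000]
```

A rules-based system with a fixed threshold would treat every customer identically; the statistical approach adapts to each customer's own baseline, which is precisely the advantage machine learning techniques bring at scale.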
Transaction data analytics may also provide invaluable insight into customer behaviour that can be used to identify upselling opportunities, thereby contributing to the achievement of the SP’s commercial goals.
To conclude, the implementation of an iterative framework provides SPs with a structured approach to the continuous optimisation of their transaction monitoring activities. This translates into a significant improvement in the SP’s ability to identify suspicious activity whilst extracting far better value from the cost of compliance.