In November 2021, the European Banking Authority (EBA) published a discussion paper (DP) seeking feedback on the use of Machine Learning (ML) in the context of internal ratings-based (IRB) models, focusing on the challenges and opportunities faced by practitioners. In its follow-up report, the EBA summarised its main conclusions on ML in the IRB context. Does this pave the way for adopting advanced ML techniques for IRB modelling?
Key takeaways
The key points from the follow-up report on the use of ML in the context of IRB are highlighted below.
Selective use of ML for IRB
The potential challenges and benefits of using ML techniques were evaluated for the different steps of IRB modelling, namely risk differentiation, risk quantification and validation. Industry feedback showed that most banks intend to use ML techniques in risk differentiation and, within that, mostly for probability of default (PD) modelling. The PD models developed would be used to enhance credit risk decisioning, with the potential to extend into IRB and capital management, subject to concerns regarding regulatory approval. Apart from PD model development, some banks are using ML for model validation and collateral valuation. Most respondents mentioned that the specific skills and technical knowledge needed to carry out model development and validation using ML techniques are not consistently in place.
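To make the risk-differentiation use case concrete, the sketch below fits a toy gradient-boosted ensemble of decision stumps to predict PD from two illustrative risk drivers (loan-to-value and months in arrears). All names and data are hypothetical; a production IRB model would need far richer data, regulatory-grade calibration and independent validation.

```python
def fit_stump(X, residuals):
    """Find the single (feature, threshold) split that best fits the residuals."""
    n_features = len(X[0])
    best = None  # (sse, feature, threshold, left_value, right_value)
    for j in range(n_features):
        for t in sorted({row[j] for row in X}):
            left = [r for row, r in zip(X, residuals) if row[j] <= t]
            right = [r for row, r in zip(X, residuals) if row[j] > t]
            if not left or not right:
                continue
            lv, rv = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((r - lv) ** 2 for r in left)
                   + sum((r - rv) ** 2 for r in right))
            if best is None or sse < best[0]:
                best = (sse, j, t, lv, rv)
    return best

def stump_output(stump, row):
    _, j, t, lv, rv = stump
    return lv if row[j] <= t else rv

def fit_pd_model(X, y, n_rounds=20, lr=0.3):
    """Gradient boosting on squared loss: each stump fits the running residuals."""
    base = sum(y) / len(y)  # start from the portfolio default rate
    preds = [base] * len(X)
    stumps = []
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, preds)]
        stump = fit_stump(X, residuals)
        if stump is None:
            break
        stumps.append(stump)
        preds = [p + lr * stump_output(stump, row) for p, row in zip(preds, X)]
    return base, lr, stumps

def predict_pd(model, row):
    base, lr, stumps = model
    score = base + sum(lr * stump_output(s, row) for s in stumps)
    return min(max(score, 0.0), 1.0)  # clip to a valid probability

# Hypothetical portfolio: [loan-to-value, months in arrears] -> default flag
X = [[0.90, 6], [0.80, 4], [0.95, 8], [0.85, 5],
     [0.30, 0], [0.40, 1], [0.20, 0], [0.35, 0]]
y = [1, 1, 1, 1, 0, 0, 0, 0]
model = fit_pd_model(X, y)
```

On this toy portfolio the ensemble cleanly differentiates risk: `predict_pd(model, [0.9, 6])` comes out close to 1 and `predict_pd(model, [0.3, 0])` close to 0. Note this sketch deliberately sidesteps the interpretability and skills challenges discussed below, which grow with model complexity.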
Our point of view:
Complexity of ML techniques
The DP highlighted the key challenges when developing and validating IRB models using ML techniques. These include (i) statistical issues, (ii) skill-related issues, and (iii) interpretation / explainability issues.
Figure 1 below shows that the most commonly used interpretability tool is Shapley values (40% of respondents), followed by enhanced reporting and documentation of the model methodology (28%), graphical tools (20%) and sensitivity analysis (8%).
Figure 1: Measures to ensure explainability of ML techniques
Source: EBA feedback from CP on DP on ML for IRB models, 14 respondents.
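As an illustration of the most-cited tool, the snippet below computes exact Shapley values for a small hypothetical PD scoring function, averaging each feature's marginal contribution over all coalitions (features absent from a coalition are set to a baseline value). This is a from-scratch sketch that is only tractable for a handful of features; at scale practitioners would use sampling-based approximations such as those in the SHAP library.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values attributing predict(x) - predict(baseline) across
    features; features outside a coalition are replaced by their baseline value."""
    d = len(x)
    phi = [0.0] * d
    for j in range(d):
        others = [k for k in range(d) if k != j]
        for size in range(d):  # coalition sizes 0 .. d-1 (excluding feature j)
            for S in combinations(others, size):
                weight = factorial(size) * factorial(d - size - 1) / factorial(d)
                with_j = [x[k] if (k in S or k == j) else baseline[k]
                          for k in range(d)]
                without_j = [x[k] if k in S else baseline[k] for k in range(d)]
                phi[j] += weight * (predict(with_j) - predict(without_j))
    return phi

# Hypothetical PD score with an interaction between the first two drivers
def pd_score(v):
    return 0.02 + 0.5 * v[0] * v[1] + 0.1 * v[2]

phi = shapley_values(pd_score, x=[0.8, 0.6, 1.0], baseline=[0.0, 0.0, 0.0])
```

The attributions satisfy the efficiency property — they sum exactly to `pd_score(x) - pd_score(baseline)` — and the interaction term is split equally between the two interacting drivers, which is precisely the behaviour that makes Shapley values attractive for explaining otherwise opaque ML models.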
Our point of view:
Interaction with regulatory frameworks
When incorporating ML techniques in credit risk modelling, the decision should not only be based on prudential regulations, but should also reflect ethical and legal aspects, including consumer and data protection requirements (i.e., the use of ML techniques should be considered relative to the General Data Protection Regulation (GDPR) and the Artificial Intelligence (AI) Act). The EBA's report clarifies how these frameworks interact with the prudential framework and highlights concerns about legal uncertainties under the AI Act.
Our point of view:
Principle-based recommendations
The EBA has set out a principle-based approach to which IRB models developed using ML techniques should adhere. The principles are intended to make clear how to adhere to the regulatory requirements set out in the Capital Requirements Regulation (CRR) for IRB models. The recommendations apply where ML models are used for risk differentiation and risk quantification purposes. In line with the regulatory expectations for model development, the key principles that the EBA has highlighted are as follows:
Our point of view:
These principles are consistent with the IRB and model risk management (MRM) frameworks, and should help bank management teams shape corresponding changes to their model development and model validation frameworks.
Way forward
The principle-based recommendations give banks a view on how the use of ML techniques can comply with regulatory IRB expectations. However, the fast-paced developments in ML techniques require that authorities continuously monitor how these techniques are implemented. The EBA plans to monitor developments in the ML field and may amend the principle-based recommendations if required.
Our point of view:
Management teams should review their frameworks and consider how and when ML techniques may be used to inform credit risk model development and/or model validation. Adopting advanced ML techniques in IRB models may not be straightforward, given the complexities around the interpretability and explainability of these techniques, and the specialised skills needed for model development and for robust independent validation of IRB models.