Explainable AI Systems in Credit Risk Assessment

a. Topic

This master's thesis focuses on developing and applying Explainable AI (XAI) systems in credit risk assessment. It examines how XAI methodologies enhance the interpretability of machine learning (ML) models used in credit scoring, exploring two techniques in particular: Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), which provide clear, understandable explanations for ML model predictions.
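For readers new to these methods, the key contrast: LIME approximates the model locally, fitting a simple surrogate around one prediction and reporting the surrogate's feature weights, while SHAP decomposes each prediction additively into a base value plus one Shapley contribution per feature. In the notation of Lundberg and Lee's SHAP paper, where z' ∈ {0,1}^M flags which of the M features are present:

```latex
g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z'_i
```

Here φ₀ is the model's base (average) prediction and φᵢ is the Shapley value attributed to feature i.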


b. Relevance

Improving the interpretability of ML-based credit scoring makes financial decision-making more transparent, fair, and accountable. This builds client trust, helps reduce lending bias, and supports compliance with regulatory requirements for explainable AI decisions. Clearer insight into how models decide also lets firms refine their risk management practices, resulting in more robust and equitable financial services.

c. Results

The study developed accurate ML models using XGBoost and Neural Networks and demonstrated the effectiveness of LIME and SHAP in enhancing their interpretability. Both techniques produced explanations consistent with financial experts' reasoning and with established credit-risk logic.


d. Implications for practitioners

  • Greater transparency and trust in ML-based credit scoring systems.
  • Clear, actionable insights that can be communicated to stakeholders.
  • Better-grounded decision-making through interpretable and justifiable predictions.

e. Methodology

Using the Lending Club dataset, the study built ML models for predicting loan defaults with XGBoost and Neural Network classifiers, chosen for their strong predictive performance. LIME and SHAP were employed to address interpretability: LIME for local explanations highlighting the key features behind individual predictions, and SHAP for global explanations of feature importance across the dataset. A sketch of this pipeline follows.
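
A minimal sketch of this pipeline in Python, assuming a preprocessed Lending Club feature matrix `X` (a pandas DataFrame) and binary default labels `y`; the hyperparameters, split, and class names below are illustrative, not the thesis's exact configuration:

```python
import shap
import xgboost as xgb
from lime.lime_tabular import LimeTabularExplainer
from sklearn.model_selection import train_test_split

# X: engineered loan features (e.g. loan amount, interest rate, DTI); y: 1 = default
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Gradient-boosted tree classifier for default prediction
model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

# Local explanation with LIME: which features drove one applicant's score?
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=X_train.columns.tolist(),
    class_names=["repaid", "default"],
    mode="classification",
)
explanation = lime_explainer.explain_instance(
    X_test.iloc[0].values, model.predict_proba, num_features=5
)
print(explanation.as_list())  # top feature contributions for this one prediction

# Global explanation with SHAP: feature importance across the test set
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)  # beeswarm of per-feature impact
```

TreeExplainer computes exact Shapley values efficiently for tree ensembles, which is why it pairs naturally with XGBoost; for the Neural Network model, a model-agnostic alternative such as shap.KernelExplainer (or shap.DeepExplainer for supported frameworks) would be the usual substitute.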