XAI - Credit Risk Analysis

Authors

  • Nilesh Patil, Sridhar Iyer, Chaitya Lakhani, Param Shah, Ansh Bhatt, Harsh Patel, Dev Patel

Abstract

This paper delves into the integration of Explainable AI (XAI) techniques with machine learning models for credit risk classification, addressing the critical issue of model transparency in financial services. We experimented with various models, including Logistic Regression, Random Forest, XGBoost, LightGBM, and Artificial Neural Networks (ANN), on real-world credit datasets to predict borrower risk levels. Our results show that while the ANN achieved the highest accuracy at 95.3%, Random Forest followed closely with 95.23%. Logistic Regression also performed strongly with an accuracy of 94.68%, while XGBoost and LightGBM delivered slightly lower accuracies of 94.4% and 94.37%, respectively. However, the superior accuracy of these complex models, particularly the ANN, comes with a trade-off: reduced transparency, making it difficult for stakeholders to understand the decision-making process. To address this, we applied XAI techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) to provide clear and understandable explanations for the predictions made by these models. This integration not only enhanced model interpretability but also built trust among stakeholders and ensured compliance with regulatory standards. This study illustrates how XAI serves as an effective mediator between the precision of sophisticated machine learning algorithms and the demand for clarity in evaluating credit risk. XAI offers a well-balanced method for managing risk in finance, harmonizing the need for both accuracy and interpretability.
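
The sketch below illustrates the kind of workflow the abstract describes: fitting one of the named models and explaining its predictions with SHAP and LIME. It is not the authors' code; the synthetic data, feature names, and hyperparameters are illustrative assumptions only.

```python
# Minimal sketch, assuming a binary credit-risk target (0 = low risk, 1 = high risk).
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for a credit dataset; a real study would use actual borrower features.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5, random_state=42)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# One of the models compared in the paper (Random Forest).
model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)

# SHAP: additive feature attributions for the tree ensemble's predictions.
shap_values = shap.TreeExplainer(model).shap_values(X_test)

# LIME: a local surrogate explanation for a single applicant's prediction.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["low risk", "high risk"], mode="classification",
)
explanation = lime_explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top features driving this one prediction
```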

Published

2024-09-14

How to Cite

Nilesh Patil, Sridhar Iyer, Chaitya Lakhani, Param Shah, Ansh Bhatt, Harsh Patel, Dev Patel. (2024). XAI - Credit Risk Analysis. International Journal of Communication Networks and Information Security (IJCNIS), 16(4), 428–442. Retrieved from https://ijcnis.org/index.php/ijcnis/article/view/7080

Section

Research Articles