Abstract
Financial institutions use credit scoring to assess loan applicants' creditworthiness, a
critical component of their decision-making. This study addresses the challenge of
black-box models in credit scoring by applying explainable artificial intelligence techniques to
enhance transparency. To achieve this, machine learning models such as logistic regression,
random forest and eXtreme Gradient Boosting (XGBoost) were developed and evaluated for
creditworthiness prediction. Additionally, the SHapley Additive exPlanations (SHAP) and Local
Interpretable Model-agnostic Explanations (LIME) techniques were applied to enhance model interpretability
by identifying the key features influencing predictions. The random forest (85% accuracy)
and XGBoost (84% accuracy) models outperformed logistic regression (74% accuracy) in predictive
performance. Features such as credit history and the presence of a bank account were highlighted
as influential in specific low-risk predictions. The interpretability offered by SHAP mitigates
the black-box nature of artificial intelligence, fostering trust in credit scoring.
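The workflow summarized above can be sketched in a minimal form. This is an illustrative example, not the study's code: it trains a logistic regression baseline and a random forest on a synthetic stand-in for a credit dataset, then ranks features with permutation importance, a simpler model-agnostic technique used here in place of SHAP/LIME to show the same idea of identifying influential features.

```python
# Illustrative sketch (assumed setup, not the study's data or code):
# compare an interpretable baseline with an ensemble model on synthetic
# credit-like data, then rank features model-agnostically.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary good/bad-credit labels over 5 hypothetical features.
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

acc_logit = logit.score(X_te, y_te)
acc_forest = forest.score(X_te, y_te)

# Model-agnostic feature ranking for the ensemble model: features whose
# shuffling degrades held-out accuracy the most are deemed most influential.
imp = permutation_importance(forest, X_te, y_te, n_repeats=10, random_state=0)
ranking = imp.importances_mean.argsort()[::-1]
```

In the study itself, SHAP additionally attributes each individual prediction to feature contributions, which is what supports the per-applicant, low-risk explanations mentioned above.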