SHAP vs LIME for Explaining Credit-Scoring Models
Introduction
Credit-scoring models determine loan approvals, affecting individuals' financial lives. Regulators require explainability: why was an application rejected? SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) are popular explanation methods, but they trade off differently between theoretical rigor, explanation fidelity, and computation speed. Comparing them informs method selection for credit-modeling applications.
SHAP: Theoretically Grounded
SHAP computes Shapley values from cooperative game theory, attributing a prediction to each feature as that feature's average marginal contribution across all possible feature coalitions. It is theoretically principled, satisfying desirable axioms such as local accuracy and consistency, and its attributions are comparable across model types. The trade-off is computational cost: exact Shapley values require evaluating the model over exponentially many feature subsets, which becomes expensive for large datasets, though model-specific approximations (such as TreeSHAP for tree ensembles) are much faster. SHAP provides both global feature importance and local per-prediction explanations: high-quality attributions at the price of slower computation.
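To make the coalition idea concrete, here is a minimal sketch of exact Shapley-value computation for a toy credit scorer. The scoring function, feature names, and the baseline-substitution scheme for "absent" features are illustrative assumptions, not part of any real model; exact enumeration is only feasible for a handful of features, which is precisely why libraries rely on approximations.

```python
from itertools import combinations
from math import factorial

# Hypothetical linear credit scorer over three features (names are illustrative).
def score(income, debt_ratio, history_years):
    return 0.5 * income - 0.8 * debt_ratio + 0.3 * history_years

# "Absent" features are replaced by a baseline value (a common approximation).
BASELINE = {"income": 0.0, "debt_ratio": 0.0, "history_years": 0.0}

def value(coalition, instance):
    """Model output with features outside the coalition set to baseline."""
    args = {f: (instance[f] if f in coalition else BASELINE[f]) for f in instance}
    return score(**args)

def shapley_values(instance):
    """Exact Shapley values: weighted average of marginal contributions
    of each feature over all coalitions of the remaining features."""
    features = list(instance)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = value(set(subset) | {f}, instance)
                without_f = value(set(subset), instance)
                total += weight * (with_f - without_f)
        phi[f] = total
    return phi

applicant = {"income": 4.0, "debt_ratio": 2.0, "history_years": 10.0}
phi = shapley_values(applicant)
print(phi)
```

For a linear model with a zero baseline, each attribution reduces to weight times feature value, and the efficiency axiom holds: the attributions sum to the prediction minus the baseline prediction.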
LIME: Fast and Local
LIME explains an individual prediction by sampling perturbed inputs around it and fitting a simple interpretable surrogate, typically a weighted linear model, to the black-box outputs. It is fast (often milliseconds per explanation), model-agnostic, and intuitive to read. However, approximation quality depends on how the local neighborhood and proximity kernel are defined, and it lacks SHAP's axiomatic guarantees. These properties make LIME better suited to real-time explanation systems.
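The procedure above can be sketched in a few dozen lines: sample Gaussian perturbations around the instance, weight each sample by a proximity kernel, and fit a weighted linear surrogate via the normal equations. The black-box scorer, kernel choice, and all parameter values below are illustrative assumptions; real LIME implementations add feature selection and interpretable binned representations.

```python
import random
from math import exp

# Hypothetical nonlinear black-box scorer of (income, debt_ratio); names illustrative.
def black_box(x):
    income, debt = x
    return income ** 2 - 3.0 * debt

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lime_explain(f, instance, n_samples=500, kernel_width=1.0, seed=0):
    """Fit [intercept, w_1, ..., w_d] of a locally weighted linear surrogate."""
    rng = random.Random(seed)
    d = len(instance)
    XtWX = [[0.0] * (d + 1) for _ in range(d + 1)]
    XtWy = [0.0] * (d + 1)
    for _ in range(n_samples):
        z = [xi + rng.gauss(0.0, 1.0) for xi in instance]      # perturbed sample
        dist2 = sum((zi - xi) ** 2 for zi, xi in zip(z, instance))
        w = exp(-dist2 / kernel_width ** 2)                    # proximity kernel
        row = [1.0] + z
        y = f(z)
        for i in range(d + 1):
            XtWy[i] += w * row[i] * y
            for j in range(d + 1):
                XtWX[i][j] += w * row[i] * row[j]
    return solve(XtWX, XtWy)

coefs = lime_explain(black_box, [4.0, 2.0])
intercept, w_income, w_debt = coefs
print(coefs)
```

Around income = 4, the surrogate slope for income should approximate the local gradient of the quadratic term (about 8), and the debt coefficient should recover the true linear weight (about -3), illustrating how the linear fit captures only local behavior of a nonlinear model.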
Credit Scoring Application
For credit decisions, regulators often require explanations delivered in real time during loan processing, where LIME's speed is advantageous. For post-hoc regulatory audits, SHAP's theoretical rigor is more valuable. A practical hybrid uses LIME for real-time applicant-facing explanations and SHAP for periodic audits.
Conclusion
Strategic use of SHAP and LIME, choosing each method to fit its context, improves credit-model transparency and regulatory compliance.