Frontier Ledger

The definitive knowledge platform for AI-powered finance

Explainability, Governance & Ethics

20 articles on SHAP, LIME, model interpretability, ethical AI, and responsible AI practices.

1. SHAP vs LIME for Explaining Credit-Scoring Models
2. Counterfactual Explanations in Portfolio Allocation Decisions
3. Measuring Algorithmic Bias in Loan Approval Systems
4. Building "Model Cards" for Financial ML Applications
5. Causal Attribution of Alpha to Model Features
6. Explainer Dashboards for Regulators: UX Best Practices
7. ISO 42001: The New AI Management Standard in Finance
8. Ethical Implications of Predicting Individual Default
9. Interpreting Reinforcement-Learning Policies via Decision Trees
10. Documentation Automation with LLMs for Model Governance
11. Measuring Fairness in Fraud Detection Classifiers
12. Explainable Deep Hedging: Opening the Black Box
13. Red-Team Testing for Financial LLMs: Prompt Injection Scenarios
14. Transparency vs IP Protection: Balancing Trade Secrets
15. Using Differential Privacy to Provide Explanations Without Data Leakage
16. Explainability Requirements Under the EU AI Act for Trading Models
17. Audit Trails for Synthetic Data Usage
18. Benchmarking XAI Methods on Volatility Forecasting
19. Debate: Should AI Models Be Allowed to Execute Trades Autonomously?
20. Human-in-the-Loop Oversight Frameworks for Algorithmic Trading