Introduction

Machine learning models in finance are often black boxes: deep neural networks predict asset returns, but their reasoning is opaque. Regulators and risk managers demand explanations: why did the model predict a price move? LLMs can generate natural-language explanations of model predictions, making black-box models interpretable and auditable.

**Explanation-as-Text Problem**

**Bridging the Interpretability Gap**

Technical explanations (SHAP values, attention weights) are precise but inaccessible to non-experts. English explanations are accessible but risk oversimplification. LLMs can translate between domains: "Input SHAP values → generate English explanation suitable for business stakeholder."

**Post-Hoc vs. Built-In Explanations**

Generate explanations after the model's prediction (post-hoc): observe the prediction and inputs, then generate the explanation. Advantage: applies to any existing model, with no retraining needed. Disadvantage: explanations are speculative rationalizations, not mechanistic accounts of the model's computation.

**Explanation Generation Workflow**

**Input to LLM**

Provide: model prediction (e.g., "S&P 500 up 1.2% tomorrow"), input features (recent momentum, VIX, yield curve), feature importance scores (SHAP or similar). LLM generates natural-language explanation.
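This assembly step can be sketched in a few lines. The helper below is a minimal, hypothetical prompt builder (the function name, feature names, and values are illustrative, not part of any real system):

```python
# Sketch: assemble an explanation prompt from a prediction, feature values,
# and SHAP-style importance scores. All names and numbers are illustrative.

def build_explanation_prompt(prediction: str, features: dict, shap: dict) -> str:
    """Format a model's output and attributions into an LLM prompt."""
    lines = [f"Model prediction: {prediction}", "", "Feature values and SHAP importances:"]
    # List the most influential features first, by absolute SHAP value.
    for name in sorted(shap, key=lambda k: abs(shap[k]), reverse=True):
        lines.append(f"- {name}: value={features[name]}, shap={shap[name]:+.2f}")
    lines.append("")
    lines.append("Write a short plain-English explanation for a business "
                 "stakeholder, using only the feature values listed above.")
    return "\n".join(lines)

prompt = build_explanation_prompt(
    "S&P 500 up 1.2% tomorrow",
    {"5-day momentum": "+0.8%", "VIX": 15, "yield curve slope": "-0.3%"},
    {"5-day momentum": 0.4, "VIX": 0.3, "yield curve slope": -0.2},
)
print(prompt)
```

Sorting by absolute importance keeps the explanation focused on what actually moved the prediction, and the closing instruction constrains the LLM to the supplied data (the grounding concern discussed later).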

**Output**

"The model predicts a 1.2% rally driven primarily by strong recent momentum (5-day return +0.8%, SHAP importance 0.4) and low volatility (VIX 15, SHAP importance 0.3). These factors historically precede continued strength. However, the inverted yield curve (SHAP importance -0.2) suggests some caution."

**Factual Accuracy**

**Grounding in Data**

Ensure explanations reference actual feature values: the LLM should state "VIX is 15" only if VIX is actually 15. Prompt engineering enforces grounding: "Explain using only the following feature values: [data]." This prevents hallucinated explanations.
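A crude automated check for this is to extract every number cited in the generated explanation and verify it matches a supplied feature value. The sketch below assumes numeric feature values and exact matches; real checks would need tolerances and unit handling:

```python
import re

# Sketch: flag numbers in a generated explanation that do not appear among
# the supplied feature values. Explanation text and values are illustrative.

def ungrounded_numbers(explanation: str, feature_values: dict) -> list:
    """Return numbers cited in the explanation that match no feature value."""
    allowed = {float(v) for v in feature_values.values()}
    cited = [float(m) for m in re.findall(r"-?\d+(?:\.\d+)?", explanation)]
    return [x for x in cited if x not in allowed]

features = {"VIX": 15.0, "recent return": 0.8, "yield curve slope": -0.3}
grounded = ungrounded_numbers("VIX is 15 and the recent return is 0.8", features)
hallucinated = ungrounded_numbers("VIX is 22, signalling stress", features)
print(grounded)      # [] -- every cited number is in the data
print(hallucinated)  # [22.0] -- this number came from nowhere
```

Explanations that fail the check can be regenerated or escalated for human review rather than shipped to stakeholders.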

**Consistency with Model**

Validate the explanation against the model's attributions. If the explanation says "momentum is bullish" but momentum's SHAP contribution is negative (bearish), the explanation is wrong. Automated checks flag such inconsistencies.
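One simple consistency check compares the direction a sentence claims for a feature against the sign of that feature's SHAP value. The keyword lists and examples below are illustrative; a production check would use a more robust sentiment classifier:

```python
# Sketch: flag explanation sentences whose stated direction for a feature
# contradicts the sign of its SHAP attribution. Keywords are illustrative.

BULLISH = {"bullish", "positive", "supportive"}
BEARISH = {"bearish", "negative", "a drag"}

def inconsistent_claims(explanation: str, shap: dict) -> list:
    """Return features whose described direction conflicts with SHAP sign."""
    flags = []
    for sentence in explanation.lower().split("."):
        for feature, value in shap.items():
            if feature.lower() not in sentence:
                continue
            says_bull = any(w in sentence for w in BULLISH)
            says_bear = any(w in sentence for w in BEARISH)
            # A positive attribution described as bearish (or vice versa) is flagged.
            if (value > 0 and says_bear) or (value < 0 and says_bull):
                flags.append(feature)
    return flags

shap = {"momentum": 0.4, "yield curve": -0.2}
consistent = inconsistent_claims("Momentum is bullish. The yield curve is bearish.", shap)
flagged = inconsistent_claims("Momentum is bearish for the outlook.", shap)
print(consistent)  # []
print(flagged)     # ['momentum']
```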

**Case Study: Equity Return Prediction**

A quant fund has a neural network predicting 1-day equity returns from 200 features (price signals, sentiment, macro). The black-box model achieves 52% directional accuracy (vs. 50% random), but traders don't understand which signals drive its predictions.

Solution: for each prediction, compute SHAP values showing top 5 contributing features. Feed to fine-tuned LLM: "Explain why the model predicts +0.5% for Apple given: momentum up 2%, sentiment up 1.5%, insider buying, earnings miss priced in. SHAP values: [...]." LLM generates explanation suitable for trader briefing.

Traders now understand model decisions, can assess reasonableness, and identify when model goes wrong.
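The top-5 selection step in this workflow can be sketched as follows; the feature names and SHAP scores are illustrative, not real signals:

```python
# Sketch: select the top contributing features by absolute SHAP value before
# prompting the LLM. Names and scores are made up for illustration.

def top_features(shap: dict, k: int = 5) -> list:
    """Return the k (feature, shap) pairs with the largest absolute attribution."""
    return sorted(shap.items(), key=lambda kv: abs(kv[1]), reverse=True)[:k]

shap_values = {
    "20d momentum": 0.31, "news sentiment": 0.22, "insider buying": 0.15,
    "earnings surprise": -0.09, "sector flow": 0.05, "term spread": 0.02,
}
top5 = top_features(shap_values)
print(top5[0])  # ('20d momentum', 0.31)
```

With 200 features, truncating to the top contributors keeps the prompt short and steers the LLM away from narrating noise.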

**Regulatory and Compliance Applications**

**Model Risk Governance**

Regulators ask: explain your trading model. Provide: (1) technical documentation, (2) English explanation of a sample prediction. LLM-generated explanations make model behavior transparent and auditable.

**Fair Lending and Discrimination**

In credit or insurance, models must be fair (not discriminating by protected attributes). Explanations help verify fairness: "The model denied credit due to low income (feature X)." Note, however, that simply excluding race as a feature does not by itself prove non-discrimination, since other features can act as proxies for protected attributes; explanations make such proxy effects easier to spot and audit.

**Interactive Explanations**

**Counterfactual Explanations**

Generate: "If momentum were up 1% instead of 2%, the model's prediction would be [X]." Counterfactuals help understand sensitivity to features.
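A counterfactual probe just re-runs the model with one feature changed and reports the shift. The sketch below uses a toy linear model as a stand-in for the black box; the weights and feature values are illustrative, and in practice the probe would call the production model:

```python
# Sketch: counterfactual probe against a stand-in linear model.
# WEIGHTS and feature values are illustrative, not a real strategy.

WEIGHTS = {"momentum": 0.5, "sentiment": 0.3, "vix": -0.05}

def predict(features: dict) -> float:
    """Toy linear model standing in for the black-box predictor."""
    return round(sum(WEIGHTS[k] * v for k, v in features.items()), 4)

def counterfactual(features: dict, name: str, new_value: float) -> str:
    """Report how the prediction moves if one feature takes a new value."""
    base = predict(features)
    alt = predict({**features, name: new_value})
    return (f"If {name} were {new_value} instead of {features[name]}, "
            f"the prediction would move from {base:+.2f}% to {alt:+.2f}%.")

features = {"momentum": 2.0, "sentiment": 1.0, "vix": 15.0}
msg = counterfactual(features, "momentum", 1.0)
print(msg)
```

Wrapping the numeric result in a sentence like this gives the LLM (or a human) a ready-made counterfactual statement to include in the explanation.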

**Comparative Explanations**

Compare predictions for two similar assets: "Both stocks are up 1% today. Model predicts Apple up 0.5% but Tesla up 1% due to Tesla's higher sentiment (0.8 vs. 0.5) and growth expectations (1.2 vs. 0.8)." Comparative framing clarifies differences.
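Feeding the LLM a side-by-side view of the two assets' features makes the comparative framing easy to generate. A minimal formatter, with hypothetical asset names and values:

```python
# Sketch: format two assets' features side by side for a comparative
# explanation prompt. Asset names and feature values are illustrative.

def comparison_block(name_a: str, feats_a: dict, name_b: str, feats_b: dict) -> str:
    """Render a small aligned table of shared features for two assets."""
    lines = [f"{'feature':<12}{name_a:>8}{name_b:>8}"]
    for key in feats_a:
        lines.append(f"{key:<12}{feats_a[key]:>8}{feats_b[key]:>8}")
    return "\n".join(lines)

tbl = comparison_block("Apple", {"sentiment": 0.5, "growth": 0.8},
                       "Tesla", {"sentiment": 0.8, "growth": 1.2})
print(tbl)
```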

**Limitations**

**Explanations Aren't Causal**

LLM-generated explanations are correlational, not causal. "Momentum is up; model predicts rally" doesn't mean momentum causes the rally. Causal claims require causal inference techniques (RCTs, structural models). Explanations should say "associated with," not "causes."

**Selective Feature Disclosure**

Models may have learned proprietary signal relationships. Explaining each prediction risks disclosing strategy. Balance transparency (needed for governance) against confidentiality (needed for competition). Aggregate explanations; avoid signal-by-signal disclosure.
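One way to aggregate is to roll per-signal SHAP values up into coarse categories before anything reaches the explanation. The signal names and category mapping below are hypothetical:

```python
# Sketch: aggregate per-signal SHAP values into coarse categories before
# disclosure, so explanations reveal themes rather than proprietary signals.
# The signal-to-category mapping is illustrative.

CATEGORY = {
    "signal_mom_1": "price momentum", "signal_mom_2": "price momentum",
    "signal_sent_1": "sentiment", "signal_macro_1": "macro",
}

def aggregate_shap(shap: dict) -> dict:
    """Sum SHAP attributions within each disclosure-safe category."""
    totals = {}
    for signal, value in shap.items():
        cat = CATEGORY.get(signal, "other")
        totals[cat] = round(totals.get(cat, 0.0) + value, 4)
    return totals

agg = aggregate_shap({"signal_mom_1": 0.2, "signal_mom_2": 0.1,
                      "signal_sent_1": 0.15, "signal_macro_1": -0.05})
print(agg)  # {'price momentum': 0.3, 'sentiment': 0.15, 'macro': -0.05}
```

The explanation can then say "driven mainly by price momentum" without revealing which individual signals fired, preserving the governance benefit while protecting the strategy.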

**Conclusion**

LLMs translate model predictions into business-readable explanations, giving risk managers, traders, and regulators insight into model behavior. With proper grounding in data and validation against the model, LLM explanations make black-box models transparent. For ML-driven trading firms, explanation generation is essential for governance, compliance, and building stakeholder trust in models.