Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, James Robins (2017)
Link to Paper
Link to documentation
A joint article about causality and interpretable machine learning with Eleanor Dillon, Jacob LaRiviere, Scott Lundberg, Jonathan Roth, and Vasilis Syrgkanis from Microsoft.
Predictive models (e.g. XGBoost) coupled with ML interpretability tools (e.g. SHAP) are powerful. But they are useful only for:
- Making predictions
- Understanding relationships between inputs and outcomes
- Diagnosing potential problems
Predictive models should not be used for decision making, because predictive models are not causal. They implicitly assume that everyone will keep behaving the same way in the future, and therefore that the correlation patterns in the training data will stay constant. They do not model how behavior changes when you intervene.
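A minimal sketch of why correlation patterns mislead decision making, using an entirely hypothetical scenario (the variable names "loyalty", "usage", and "spend" and all coefficients are invented for illustration): an unobserved confounder drives both the feature and the outcome, so a plain predictive regression attributes the confounder's effect to the feature.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved confounder (e.g. customer loyalty) drives both the observed
# feature (product usage) and the outcome (spend).
loyalty = rng.normal(size=n)
usage = 0.8 * loyalty + rng.normal(size=n)
# True causal effect of usage on spend is 0.2; loyalty contributes 1.0.
spend = 0.2 * usage + 1.0 * loyalty + rng.normal(size=n)

def ols(X, y):
    """Least-squares coefficients for y ~ X (no intercept; data are centred)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Predictive model: regress spend on usage alone.
naive = ols(usage[:, None], spend)[0]

# Benchmark with the confounder observed and controlled for.
adjusted = ols(np.column_stack([usage, loyalty]), spend)[0]

print(f"naive coefficient:    {naive:.2f}")    # inflated: absorbs loyalty's effect
print(f"adjusted coefficient: {adjusted:.2f}") # close to the true effect 0.2
```

The naive coefficient is a fine *prediction* weight, but acting on it (e.g. pushing usage to raise spend) would overstate the payoff by a factor of roughly three, which is exactly the predictive-versus-causal gap the article is about.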