
Explaining Predictive Uncertainty by Exposing Second-Order Effects

Florian Bley
Sebastian Lapuschkin
Wojciech Samek
Grégoire Montavon

November 21, 2024

Explainable AI has brought transparency to complex ML black boxes, enabling us, in particular, to identify which features these models use to make predictions. So far, the question of how to explain predictive uncertainty, i.e., why a model 'doubts', has been scarcely studied. Our investigation reveals that predictive uncertainty is dominated by second-order effects, involving single features or product interactions between them. We contribute a new method for explaining predictive uncertainty based on these second-order effects. Computationally, our method reduces to a simple covariance computation over a collection of first-order explanations. Our method is generally applicable, allowing common attribution techniques (LRP, Gradient × Input, etc.) to be turned into powerful second-order uncertainty explainers, which we call CovLRP, CovGI, etc. The accuracy of the explanations our method produces is demonstrated through systematic quantitative evaluations, and the overall usefulness of our method is demonstrated through two practical showcases.
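To make the stated computation concrete, below is a minimal sketch of a covariance over first-order explanations. It assumes an ensemble of models whose prediction variance serves as the uncertainty measure and uses Gradient × Input as the first-order explainer; the function names and the ensemble setup are illustrative, not the authors' reference implementation.

```python
import torch

def gradient_x_input(model, x):
    """First-order explanation (Gradient x Input) for one model and one input."""
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()
    return (x.grad * x).detach()

def covariance_explanation(models, x):
    """Second-order uncertainty explanation as the covariance of first-order
    explanations across ensemble members (illustrative sketch).

    Returns a (d, d) matrix: diagonal entries attribute uncertainty to single
    features, off-diagonal entries to product interactions of feature pairs."""
    R = torch.stack([gradient_x_input(m, x) for m in models])  # shape (K, d)
    R_centered = R - R.mean(dim=0, keepdim=True)
    return R_centered.T @ R_centered / len(models)

# Example: a small ensemble of linear models on a 3-dimensional input.
torch.manual_seed(0)
models = [torch.nn.Linear(3, 1) for _ in range(10)]
x = torch.randn(3)
C = covariance_explanation(models, x)
print(C.diag())  # per-feature uncertainty attributions
```

Swapping the explainer (e.g., an LRP routine in place of Gradient × Input) would yield the corresponding CovLRP-style variant described in the abstract.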