
Explainability can foster trust in artificial intelligence in geoscience

Jesper Sören Dramsch
Monique M. Kuglitsch
Miguel-Ángel Fernández-Torres
Andrea Toreti
Rustem Arif Albayrak
Lorenzo Nava
Saman Ghaffarian
Ximeng Cheng
Jackie Ma
Wojciech Samek
Rudy Venguswamy
Anirudh Koul
Raghavan Muthuregunathan
Arthur Hrast Essenfelder

February 05, 2025

Uptake of explainable artificial intelligence (XAI) methods in geoscience is currently limited. We argue that such methods, which reveal the decision processes of AI models, can foster trust in their results and facilitate the broader adoption of AI.

Artificial intelligence (AI) offers unparalleled opportunities for analysing multidimensional data and solving complex and nonlinear problems in geoscience. However, as the complexity, and potentially the predictive skill, of an AI model increases, its interpretability — the ability to understand the model and its predictions from a physical perspective — may decrease. In critical situations, such as those caused by natural hazards, the resulting lack of understanding of how a model works, and the consequent lack of trust in its results, can become a barrier to its implementation. Here we argue that explainable AI (XAI) methods, which enhance the human-comprehensible understanding and interpretation of opaque ‘black-box’ AI models, can build trust in AI model results and encourage greater adoption of AI methods in geoscience.
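
To make the idea of an "explanation" concrete, the short Python sketch below (not part of the original comment) applies permutation feature importance, a simple model-agnostic XAI technique, to a toy regression task. The feature names and synthetic data are hypothetical stand-ins for geoscientific predictors, chosen only for illustration.

```python
# Minimal sketch of one model-agnostic XAI technique: permutation feature importance.
# The data, feature names, and model choice are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Toy predictors standing in for, e.g., rainfall, slope, and soil moisture
X = rng.normal(size=(500, 3))
# Toy target depends mainly on the first two predictors
y = 2.0 * X[:, 0] + X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Permutation importance asks: how much does shuffling each feature degrade the model?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["rainfall", "slope", "soil_moisture"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

An output like this ranks the predictors by how much the model relies on them, giving a domain scientist a first, physically checkable handle on an otherwise opaque model.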