This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Artificial intelligence in environmental and Earth system sciences: explainability and trustworthiness
Citations: 7
Authors: 3
Year: 2025
Abstract
Explainable artificial intelligence (XAI) methods have recently emerged to gain insights into complex machine learning models. XAI can be promising for environmental and Earth system science because high-stakes decision-making for management and planning requires justification based on evidence and systems understanding. However, an overview of XAI applications and trust in AI in environmental and Earth system science is still missing. To close this gap, we reviewed 575 articles. XAI applications are popular in various domains, including ecology, engineering, geology, remote sensing, water resources, meteorology, atmospheric sciences, geochemistry, and geophysics. XAI applications focused primarily on understanding and predicting anthropogenic changes in geospatial patterns and impacts on human society and natural resources, especially biological species distributions, vegetation, air quality, transportation, and climate-water related topics, including risk and management. Among XAI methods, the SHAP and Shapley methods were the most popular (135 articles), followed by feature importance (27), partial dependence plots (22), LIME (21), and saliency maps (15). Although XAI methods are often argued to increase trust in model predictions, only seven studies (1.2%) addressed trustworthiness as a core research objective. This gap is critical because understanding of the relationship between explainability and trust is lacking. While XAI applications continue to grow, they do not necessarily enhance trust. Hence, more studies on how to strengthen trust in AI applications are critically needed. Finally, this review underlines the recommendation of developing a “human-centered” XAI framework that incorporates the distinct views and needs of multiple stakeholder groups to enable trustworthy decision-making.
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,310 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations