OpenAlex · Updated hourly · Last updated: 16.03.2026, 17:54

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Artificial intelligence in environmental and Earth system sciences: explainability and trustworthiness

2025 · 7 citations · Artificial Intelligence Review · Open Access
Open full text at the publisher

Citations: 7 · Authors: 3 · Year: 2025

Abstract

Explainable artificial intelligence (XAI) methods have recently emerged to gain insights into complex machine learning models. XAI can be promising for environmental and Earth system science because high-stakes decision-making for management and planning requires justification based on evidence and systems understanding. However, an overview of XAI applications and trust in AI in environmental and Earth system science is still missing. To close this gap, we reviewed 575 articles. XAI applications are popular in various domains, including ecology, engineering, geology, remote sensing, water resources, meteorology, atmospheric sciences, geochemistry, and geophysics. XAI applications focused primarily on understanding and predicting anthropogenic changes in geospatial patterns and impacts on human society and natural resources, especially biological species distributions, vegetation, air quality, transportation, and climate- and water-related topics, including risk and management. Among XAI methods, the SHAP and Shapley methods were the most popular (135 articles), followed by feature importance (27), partial dependence plots (22), LIME (21), and saliency maps (15). Although XAI methods are often argued to increase trust in model predictions, only seven studies (1.2%) addressed trustworthiness as a core research objective. This gap is critical because our understanding of the relationship between explainability and trust is still limited. While XAI applications continue to grow, they do not necessarily enhance trust. Hence, more studies on how to strengthen trust in AI applications are critically needed. Finally, this review underlines the recommendation of developing a “human-centered” XAI framework that incorporates the distinct views and needs of multiple stakeholder groups to enable trustworthy decision-making.
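The SHAP and Shapley methods named above all rest on the same attribution idea: a feature's contribution is its average marginal effect over all coalitions of the other features. As a minimal sketch (not the KernelSHAP algorithm from the `shap` library), the exact Shapley values of a toy model can be computed by brute-force enumeration, using baseline imputation for absent features; the model, inputs, and baseline here are invented for illustration:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for the prediction f(x).

    Absent features are replaced by their baseline value (a simple
    value function); feasible only for a handful of features, since
    the loop enumerates all 2^(n-1) coalitions per feature.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size |S|
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                z_with = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                z_without = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(z_with) - f(z_without))
    return phi

# Toy linear model: for linear models with this value function, the
# Shapley value of feature i reduces to w_i * (x_i - baseline_i).
model = lambda z: 2.0 * z[0] - 1.0 * z[1] + 0.5 * z[2]
phi = shapley_values(model, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # [2.0, -2.0, 1.5]
```

The attributions satisfy the efficiency property: they sum to `f(x) - f(baseline)`, which is what makes Shapley-based explanations additive decompositions of a single prediction.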

Topics

Explainable Artificial Intelligence (XAI) · Scientific Computing and Data Management · Artificial Intelligence in Healthcare and Education