This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Causal Reasoning as a Path to Explainable and Generalizable Artificial Intelligence
Citations: 0
Authors: 1
Year: 2024
Abstract
Artificial Intelligence (AI) systems, particularly those based on deep learning, have achieved extraordinary success in pattern recognition and predictive tasks. However, their reliance on correlation-based learning has raised serious concerns regarding explainability, robustness, fairness, and generalization. These limitations are especially problematic in high-stakes domains such as healthcare, autonomous systems, finance, and governance, where AI decisions must be transparent, reliable, and adaptable to changing environments. Causal reasoning offers a promising paradigm to address these challenges by enabling AI systems to move beyond surface-level correlations toward an understanding of underlying cause–effect relationships. This paper explores causal reasoning as a foundational pathway to explainable and generalizable artificial intelligence. It examines the theoretical underpinnings of causal inference, contrasts causal and correlational learning, and analyzes how causal models enhance explainability and out-of-distribution generalization. The paper further reviews emerging approaches for integrating causal reasoning into modern AI systems, including structural causal models, counterfactual learning, invariant representations, and hybrid neuro-symbolic architectures. Key applications, challenges, and future research directions are discussed. The study argues that causal reasoning is not merely an auxiliary feature but a necessary component for building trustworthy, human-aligned, and generalizable AI systems.
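The abstract names structural causal models and counterfactual learning among the approaches the paper reviews. As a purely illustrative aside (not taken from the paper), the sketch below uses a toy structural causal model with a hypothetical confounder Z, treatment T, and outcome Y to show how a correlational estimate diverges from the causal effect; all structural equations and variable names are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy structural causal model (assumed for illustration, not from the paper):
#   Z := N_Z                      confounder
#   T := 1[Z + N_T > 0]           treatment depends on Z
#   Y := 2*T + 3*Z + N_Y          outcome depends on T and Z; true effect of T is 2
n = 10_000
Z = rng.normal(size=n)
T = (Z + rng.normal(size=n) > 0).astype(float)
Y = 2 * T + 3 * Z + rng.normal(size=n)

# Correlational estimate: difference in means, biased upward by the confounder Z.
naive = Y[T == 1].mean() - Y[T == 0].mean()

# Causal estimate via back-door adjustment on Z, approximated here by linear
# regression of Y on T and Z; the coefficient of T recovers the true effect.
X = np.column_stack([np.ones(n), T, Z])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)

print(f"naive difference in means: {naive:.2f}")    # noticeably larger than 2
print(f"adjusted causal effect:    {beta[1]:.2f}")  # close to the true value 2
```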
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,311 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations