This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Research Exploration of Artificial Intelligence: The Black Box
Citations: 0
Authors: 4
Year: 2025
Abstract
The explosion of Artificial Intelligence (AI) has given way to the popularity of complex models, particularly deep learning models, which are more often than not opaque black boxes. This paper critically discusses the issues arising from these non-transparent systems, especially ethics, trust, and regulatory concerns in high-stakes domains such as healthcare, finance, and criminal justice. We provide a critical assessment of the most prominent explainable AI (XAI) techniques, such as LIME, SHAP, and counterfactual reasoning, and compare the extent to which they succeed in making complex models interpretable. We also evaluate the contextual risks linked to different application areas and stress the differing interpretability requirements across industry sectors. To conclude, the paper offers constructive guidelines for future AI research, along with a call to support interpretable-by-default models, comparable interpretability measures, and a more in-depth consideration of ethics and law. In doing so, the paper aims to contribute to sound AI practices grounded in accuracy, transparency, and trust across multiple disciplines.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,379 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,246 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,271 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,128 citations