OpenAlex · Updated hourly · Last updated: 06.04.2026, 02:46

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

TRANSFORMING BLACK BOX MODELS INTO TRANSPARENT SYSTEMS THROUGH EXPLAINABLE AI METHODS

2025 · 0 Citations · QP-AIDSE · Open Access
Open full text at publisher

Citations: 0
Authors: 1
Year: 2025

Abstract

The rapid integration of AI into critical sectors such as healthcare, banking, and autonomous vehicles has created growing demand for interpretability and accountability in machine learning models. Although black box models (e.g., deep neural networks and ensemble methods) excel at producing accurate predictions, users find them hard to accept, trust, and collaborate with because these models do not reveal how they make decisions. Explainable AI (XAI) offers a way to bridge this gap by rendering complex systems in more understandable and observable forms. This research examines a range of XAI approaches, including data visualization tools, inherently interpretable models, and model-agnostic methods such as SHAP and LIME. The insight XAI provides into feature importance, causal links, and decision pathways enables fairer algorithmic decision-making, more trustworthy results, and easier debugging. The paper then discusses open challenges such as consistency, scalability, and the risk of oversimplification; striking a balance between transparency and fidelity is crucial. By transforming black box models into transparent systems, XAI lays the groundwork for the ethical deployment of AI in high-stakes real-world settings and enables effective collaboration between humans and AI.
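The model-agnostic idea behind methods like SHAP and LIME can be illustrated with a much simpler relative: permutation feature importance, which probes a black box only through its predictions. The sketch below is illustrative, not the paper's method; the `black_box` function and all data are hypothetical stand-ins for an opaque model.

```python
import random

def black_box(x):
    # Hypothetical opaque model: in reality this could be a deep net
    # or an ensemble whose internals we cannot inspect.
    return 3.0 * x[0] + 0.5 * x[1]

def permutation_importance(model, X, y, feature, trials=20, seed=0):
    """Importance of one feature = average increase in MSE after
    shuffling that feature's column, breaking its link to the target."""
    rng = random.Random(seed)

    def mse(preds):
        return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)

    base = mse([model(row) for row in X])
    increases = []
    for _ in range(trials):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        permuted = [row[:feature] + [v] + row[feature + 1:]
                    for row, v in zip(X, col)]
        increases.append(mse([model(row) for row in permuted]) - base)
    return sum(increases) / trials

# Synthetic reference data drawn around the model's input space.
rng = random.Random(42)
X = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(200)]
y = [black_box(row) for row in X]

imp0 = permutation_importance(black_box, X, y, feature=0)
imp1 = permutation_importance(black_box, X, y, feature=1)
```

Because the technique only needs prediction access, it applies to any model, which is the same property that makes SHAP and LIME "model-agnostic"; here feature 0, with the larger coefficient, receives the larger importance score.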

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare