OpenAlex · Updated hourly · Last updated: 13.03.2026, 05:32

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Process Quality Assurance of Artificial Intelligence in Medical Diagnosis

2024 · 3 citations · 7 authors

Abstract

While artificial intelligence (AI) has shown promising results in healthcare, it is undeniable that there are risks associated with AI in healthcare that society must acknowledge. This paper presents a comprehensive systems modeling framework for evaluating trust, and for responding to reasons for mistrust and distrust, in AI-assisted medical diagnosis, with a specific focus on the diagnosis of cardiac sarcoidosis, utilizing Explainable Artificial Intelligence (XAI) techniques. The design includes two primary sections: 1. Identifying the most and least disruptive scenarios for the system, as well as the most important initiatives for the system. 2. Utilizing XAI techniques such as SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Anchors to explain how machine learning models justify their outcomes. The findings underscore the importance of employing explainable AI in critical domains like healthcare, where patients' lives are at stake. XAI can be employed to analyze outcomes for AI users: determining feature importance, improving comprehension of AI outputs, enhancing the transparency, explainability, and interpretability of those outputs, and facilitating data assessment.
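As a minimal illustration of the feature-attribution idea behind SHAP (the paper itself applies the SHAP, LIME, and Anchors libraries to a cardiac-sarcoidosis model; the model and values below are hypothetical stand-ins): exact Shapley values attribute a prediction to each input feature by averaging the feature's marginal contribution over all coalitions of the other features, with absent features held at a baseline.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for model f at input x.
    Features outside a coalition S are held at their baseline values."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):  # coalition sizes 0 .. n-1
            for S in combinations(others, k):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy linear "risk score" (hypothetical weights, not from the paper).
# For a linear model, the Shapley value reduces to w_j * (x_j - baseline_j).
w = [2.0, -1.0, 0.5]
f = lambda v: sum(wj * vj for wj, vj in zip(w, v))
phi = shapley_values(f, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
# phi == [2.0, -2.0, 1.5]; the attributions sum to f(x) - f(baseline)
```

The SHAP library estimates these same quantities efficiently for real models (e.g. tree ensembles), where exact enumeration over all 2^n coalitions would be intractable.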
