This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Interpretable machine learning models for beta thalassemia prediction: an explainable AI approach for smart healthcare 5.0
0
Citations
6
Authors
2026
Year
Abstract
To ensure the model's transparency and interpretability, the proposed method integrates SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), enabling both global and local interpretability of model predictions. SHAP provides insight into which features matter globally, while LIME explains individual predictions, making the model's decisions more comprehensible for clinical applications.
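To illustrate the local-explanation idea the abstract attributes to LIME, here is a minimal, library-free sketch: explain one prediction of a black-box model by fitting a proximity-weighted linear surrogate around that instance. The toy classifier, feature names (`hb`, `mcv`), thresholds, and kernel width below are hypothetical illustrations and are not the paper's actual thalassemia model or feature set.

```python
import math
import random

def black_box(hb, mcv):
    """Hypothetical toy classifier: flags low haemoglobin (g/dL)
    combined with low MCV (fL). Stands in for the paper's model."""
    return 1.0 if (hb < 11.0 and mcv < 75.0) else 0.0

def lime_style_weights(instance, predict, n_samples=2000, scale=5.0):
    """Return one local slope per feature: perturb the instance,
    weight each sample by an RBF proximity kernel, then fit weighted
    least-squares slopes (features are perturbed independently, so
    marginal slopes approximate the joint linear surrogate)."""
    rng = random.Random(0)
    offsets, ys, wts = [], [], []
    for _ in range(n_samples):
        z = [rng.gauss(0.0, scale) for _ in instance]
        offsets.append(z)
        ys.append(predict(*(v + dz for v, dz in zip(instance, z))))
        wts.append(math.exp(-sum(d * d for d in z) / (2 * scale ** 2)))
    y_bar = sum(w * y for w, y in zip(wts, ys)) / sum(wts)
    slopes = []
    for i in range(len(instance)):
        num = sum(w * z[i] * (y - y_bar) for w, z, y in zip(wts, offsets, ys))
        den = sum(w * z[i] ** 2 for w, z in zip(wts, offsets))
        slopes.append(num / den)
    return slopes

# Explain a borderline positive case: both local slopes come out
# negative, i.e. raising either value pushes the prediction toward
# the "healthy" class near this instance.
hb_slope, mcv_slope = lime_style_weights((10.5, 72.0), black_box)
```

The actual LIME library adds feature selection and discretization on top of this core recipe; the sketch keeps only the perturb-weight-fit loop that makes a single prediction locally interpretable.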
Related works
Automatic Recording Apparatus for Use in Chromatography of Amino Acids
1958 · 9,602 citations
Enzymatic Amplification of β-Globin Genomic Sequences and Restriction Site Analysis for Diagnosis of Sickle Cell Anemia
1985 · 8,996 citations
Estimation of total, protein-bound, and nonprotein sulfhydryl groups in tissue with Ellman's reagent
1968 · 7,950 citations
Hepcidin Regulates Cellular Iron Efflux by Binding to Ferroportin and Inducing Its Internalization
2004 · 4,722 citations
A novel MHC class I–like gene is mutated in patients with hereditary haemochromatosis
1996 · 3,705 citations