This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable AI Model as a Complementary Tool to Randomized Controlled Trials (RCTs): A Comprehensive Assessment using Historical COVID Data
0
Citations
10
Authors
2023
Year
Abstract
Background and Purpose: Randomized Controlled Trials (RCTs) are the gold standard for establishing causality in drug efficacy. However, they have limitations due to strict inclusion criteria and complexity. When RCTs are not feasible, researchers turn to observational studies. Explainable AI (XAI) models provide an alternative approach to understanding cause-and-effect relationships.

Experimental Approach: In this study, we utilized an XAI model with a historical COVID-19 dataset to establish the hypothesis of drug efficacy. The dataset consisted of 3,307 COVID-19 patients from a hospital in Delhi, India. Eight XAI models were employed to assess factors influencing COVID-19 mortality. LIME and SHAP interpretability techniques were applied to the best-performing ML model to determine feature importance in the outcome.

Key Results: The XGBoost ML classifier outperformed the other models (weighted F1 score, MCC, accuracy, ROC-AUC, sensitivity, and specificity of 91.7%, 58.8%, 91.3%, 92.2%, 93.8%, and 70.2%, respectively), and the SHAP summary plot enabled the identification of significant features that contribute to COVID-19 mortality. These features encompassed comorbidities such as renal and cardiac diseases and tuberculosis. Additionally, the XAI models revealed that medications such as enoxaparin, remdesivir, and ivermectin did not exhibit preventive effects on mortality.

Conclusion and Implications: While XAI models offer valuable insights, they should not replace RCTs as a priority for ensuring the safety and effectiveness of new drugs and treatments. However, XAI models can serve as valuable tools for suggesting future research directions and aiding clinical decision-making, particularly when the efficacy of a drug in a controlled trial is uncertain.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,311 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations