This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Augmenting Decision Competence in Healthcare Using AI-based Cognitive Models
Citations: 8
Authors: 4
Year: 2020
Abstract
In many critical decision domains, such as medicine, transparency of the underlying decision process is essential. This extends to decision processes supported by artificial intelligence. Rather than applying a post-hoc explainability approach from explainable-AI research (such as SHAP or LIME), we develop and test an intrinsically transparent and intuitively interpretable model from cognitive science, the fast-and-frugal tree, in a comparative analysis with state-of-the-art machine learning models. The resulting decision support can easily be implemented as a laminated pocket card, augmenting the decision competence of physicians rather than replacing it.
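A fast-and-frugal tree, as mentioned in the abstract, is a decision tree in which every cue is a yes/no question and at least one answer at each node is an immediate exit. A minimal sketch, using hypothetical triage cues chosen for illustration (not the cues from the paper):

```python
# Minimal sketch of a fast-and-frugal tree (FFT).
# The cues below (chest_pain, age, abnormal_ecg) are hypothetical
# illustrations, not the predictors used in the paper.

def fft_classify(patient):
    """Classify a patient dict with a three-cue fast-and-frugal tree."""
    # Cue 1: a positive answer exits immediately with a decision.
    if patient["chest_pain"]:
        return "refer"
    # Cue 2: again one exit; a negative answer passes to the next cue.
    if patient["age"] >= 65:
        return "refer"
    # Final cue: both answers are exits.
    return "refer" if patient["abnormal_ecg"] else "discharge"

print(fft_classify({"chest_pain": False, "age": 70, "abnormal_ecg": False}))
```

Because each node fits on one line of a checklist, such a tree can be printed on a pocket card and applied without any computation, which is the transparency property the paper exploits.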
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,336 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,241 citations
"Why Should I Trust You?"
2016 · 14,227 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,114 citations