This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Designing for Physician Trust: Toward a Machine Learning Decision Aid for Radiation Toxicity Risk
Citations: 10
Authors: 3
Year: 2019
Abstract
The application of machine learning (ML) technologies in health care is expected to improve care delivery and patient outcomes. However, there are no established best practices for designing these technologies for use in clinical settings. To explore user needs and design requirements for the user interface of an ML risk prediction tool in development, we consulted with subject matter experts and physicians. We explored physicians' expectations of using an ML tool in clinical practice and their design preferences. Our process revealed physician perspectives on trusting an ML tool and opportunities to design for these considerations while navigating ambiguity in the tool's outputs.
Related Work
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,493 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,377 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,555 citations