This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Human Factors in Data-Driven Healthcare
0
Citations
1
Author
2023
Year
Abstract
Artificial Intelligence (AI) has fueled advances in many fields including healthcare, business and engineering. However, AI methods such as deep learning treat decision making as a black box where inputs are converted into outputs using a large number of parameterized features, with the reasoning behind decisions being largely opaque. It is difficult for humans to trust decisions when they don't know the reasoning behind them. The goal of human-centered AI is to build reliable and safe AI systems. Human-in-the-Loop (HITL) systems build on earlier human factors approaches to complex aviation and nuclear plant interfaces, where trust is increased by integrating human supervision and expertise into the automation/AI system. Human factors engineering seeks to reduce human error, increase productivity, and enhance safety and comfort, with a specific focus on the interaction between the human and the automation/AI. In the research reported in this dissertation, I merged human-computer interaction approaches to user experience design with human factors approaches, and I show how human factors can be applied in the field of data-driven healthcare. I present empirical findings concerning the value of human experts in improving machine learning clinical prediction based on medical data. I also report on the design and development of tools and approaches for making machine learning prediction models explainable and usable in the context of data-driven healthcare. Since highly skilled data scientists and machine learning experts will always be scarce relative to the ever-increasing need to use large data sets to improve decision making, the present research points the way towards human factors tools/systems that allow users (such as physicians) without strong machine learning or data mining backgrounds to use AI-based clinical decision support systems that they can trust.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations