This is an overview page with metadata for this scientific work. The full article is available from the publisher.
A risk governance framework for healthcare decision support systems based on socio-technical analysis
Citations: 0
Authors: 4
Year: 2020
Abstract
We are developing an Artificial Intelligence (AI) risk governance framework based on human factors and AI governance principles to make automated healthcare decision support safer and more accountable. Today, the healthcare system faces a huge reporting overload, which has made manual processing and comprehensive decision-making impossible. Emerging advances in AI, and especially in Natural Language Processing, seem an attractive answer to human limitations in processing high volumes of reports. However, automation carries known risks, including the organisational change involved in deploying AI itself, as well as emotions and ethics, factors that are rarely taken into consideration in AI-based decision-making. To explore this, we will first construct a Decision Support System (DSS) tool based on a knowledge graph extracted from real-world healthcare reports. The tool will then be deployed in a controlled manner in a hospital, and its operation will be analysed using an established socio-technical methodology developed by the Centre for Innovative Human Systems in Trinity College Dublin over 25 years of research. We will contribute by integrating computer science with organisational psychology, using human factors methods to identify the impact of AI-based healthcare DSS, their associated risks, and the ethical and legal challenges. We hypothesise that collaborating with organisational psychologists to consider the global system of human decision-making and AI-based DSS will help minimise the risk of AI-based decision-making in a way that ensures fairness, accountability, and transparency. This study will be carried out with our partner hospital, St. James in Dublin.
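The abstract does not publish the extraction pipeline, but the core idea of turning report text into a queryable knowledge graph can be sketched minimally. The patterns, relation names, and example sentences below are all hypothetical; a real system would use proper NLP (e.g. dependency parsing) rather than regular expressions.

```python
# Hypothetical sketch: build a tiny knowledge graph of
# (subject, relation, object) triples from report sentences.
import re
from collections import defaultdict

# Toy relation vocabulary; purely illustrative, not from the paper.
TRIPLE_PATTERN = re.compile(
    r"(?P<subj>[A-Za-z ]+?) (?P<rel>caused|delayed|reported) (?P<obj>[A-Za-z ]+)"
)

def extract_triples(reports):
    """Extract (subject, relation, object) triples from report sentences."""
    triples = []
    for sentence in reports:
        m = TRIPLE_PATTERN.search(sentence)
        if m:
            triples.append((m["subj"].strip().lower(),
                            m["rel"].lower(),
                            m["obj"].strip().lower()))
    return triples

def build_graph(triples):
    """Index triples as an adjacency map: subject -> [(relation, object)]."""
    graph = defaultdict(list)
    for subj, rel, obj in triples:
        graph[subj].append((rel, obj))
    return graph

# Invented example reports, standing in for real incident data.
reports = [
    "Medication mislabeling caused an adverse reaction",
    "Staff shortage delayed patient discharge",
]
graph = build_graph(extract_triples(reports))
print(graph["medication mislabeling"])  # [('caused', 'an adverse reaction')]
```

A DSS could then answer queries such as "what events were caused by X" by walking this adjacency map, which is the kind of aggregate view manual report processing cannot provide at volume.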
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,287 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,140 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,534 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,450 citations