This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Clinicians risk becoming “liability sinks” for artificial intelligence
Citations: 30
Authors: 10
Year: 2024
Abstract
The problem

Artificial Intelligence (AI) is often touted as healthcare's saviour, but its potential will only be realised if developers and providers consider the whole clinical context and AI's place within it. One of many aspects of that clinical context is the question of liability. Analysis of responsibility attributions in complex, partly automated socio-technical systems has identified the risk that the nearest human operator may bear the brunt of responsibility for overall system malfunctions.[1] As we move towards integrating AI into healthcare systems, it is important to ensure that this does not translate into clinicians unfairly absorbing legal liability for errors and adverse outcomes over which they have limited control.

In the current, standard model of AI-supported decision-making in healthcare, electronic data is fed into an algorithm, typically a machine-learnt model, which integrates the acquired information to arrive at a recommendation that is output to a human clinician. The clinician can consider this recommendation alongside information from other sources, including examination of and discussion with the patient, and either accept the recommendation as-is or replace it with a decision they make themselves (Fig. 1). For example, in a system recommending treatment for diabetes, the system may recommend, based on coded electronic data, that it is appropriate to start insulin; after considering patient context and wishes, however, the clinician may choose to override this. Due to differences in regulatory approval processes, the positioning of such systems as clinical support rather than diagnostic makes them cheaper and quicker to get to market. Additionally, given recent guidance from the National Health Service in England, which clarifies that the final decision must be taken by a healthcare professional,[2] this model looks set to become the norm across the UK healthcare system.

But the standard model may have a negative impact on the clinician, who must choose between accepting the AI recommendation or substituting their own decision, which, despite probably being AI-influenced, involves largely reverting to a traditional (non-AI) approach. They risk
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,287 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,140 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,534 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,450 citations