OpenAlex · Updated hourly · Last updated: 22.03.2026, 05:02

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Clinicians risk becoming “liability sinks” for artificial intelligence

2024 · 30 citations · Future Healthcare Journal · Open Access

30 citations · 10 authors · 2024

Abstract

The problem

Artificial Intelligence (AI) is often touted as healthcare's saviour, but its potential will only be realised if developers and providers consider the whole clinical context and AI's place within it. One of many aspects of that clinical context is the question of liability. Analysis of responsibility attributions in complex, partly automated socio-technical systems has identified the risk that the nearest human operator may bear the brunt of responsibility for overall system malfunctions.1 As we move towards integrating AI into healthcare systems, it is important to ensure that this does not translate into clinicians unfairly absorbing legal liability for errors and adverse outcomes over which they have limited control.

In the current, standard model of AI-supported decision-making in healthcare, electronic data is fed into an algorithm, typically a machine-learnt model, which integrates the acquired information to arrive at a recommendation that is output to a human clinician. The clinician can consider this recommendation alongside information from other sources, including examination of and discussion with the patient, and either accept the recommendation as-is or replace it with a decision they make themselves (Fig. 1). For example, in a system recommending treatment for diabetes, the system may recommend, based on coded electronic data, that it is appropriate to start insulin; after considering patient context and wishes, however, the clinician may choose to override this. Due to differences in regulatory approval processes, positioning such systems as clinical support rather than diagnostic makes them cheaper and quicker to get to market. Additionally, given recent guidance from the National Health Service in England, which clarifies that the final decision must be taken by a healthcare professional,2 this model looks set to become the norm across the UK healthcare system. But the standard model may have a negative impact on the clinician, who must choose between accepting the AI recommendation or substituting their own decision, which, despite probably being AI-influenced, involves largely reverting to a traditional (non-AI) approach. They risk
