OpenAlex · Updated hourly · Last updated: 15.03.2026, 06:08

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

SP2.13 Machine Learning in Clinical Decision Making in Emergency General Surgery: A Qualitative Study of the Acceptability of Artificial Intelligence (MAIDEN-Q)

2025 · 0 citations · British Journal of Surgery

0 citations · 4 authors · Year: 2025

Abstract

Introduction: Artificial intelligence (AI) and machine learning (ML) are developing areas in healthcare. This study (MAIDEN-Q) explores patient and healthcare professional (HCP) perspectives on the potential development and uses of AI for clinical decision-making in emergency general surgery.

Methods: MAIDEN-Q is a qualitative phenomenological study. Adult patients who had recently undergone emergency abdominal surgery were purposefully sampled. Interviews were recorded, transcribed and thematically analysed.

Results: Four major themes emerged from 15 patient and 9 HCP interviews.

AI in healthcare: Patients and HCPs accepted the use of AI, with most seeing its integration as positive and inevitable.

Use of AI in clinical decision-making: Patients and HCPs supported the role of AI as an adjunct to diagnostic and treatment decision-making processes. Patients expressed reservations about how subjective clinical information, such as pain, could be used accurately.

Training ML models: Using healthcare records to train ML models was deemed ethical, but some patients felt access should be limited to non-intimate data. Patients were explicit about not wanting their healthcare data to be given or sold to commercial entities.

Clinical and research governance: Patients felt ML model development should be regulated. Responsibility for 'wrong decisions' was felt to lie with the clinician or the manufacturer of the AI.

Discussion: Implementation of AI in healthcare is generally supported. Where explicit individual consent has not been sought, a cautious and transparent approach to acquiring and using the healthcare data required to train ML models is needed, to avoid undermining public trust or patient-doctor relationships.
