This is an overview page with metadata for this scientific work. The full article is available from the publisher.
SP2.13 Machine Learning in Clinical Decision Making in Emergency General Surgery: A Qualitative Study of the Acceptability of Artificial Intelligence (MAIDEN-Q)
0
Citations
4
Authors
2025
Year
Abstract
Introduction: Artificial intelligence (AI) and machine learning (ML) are developing areas in healthcare. This study (MAIDEN-Q) explores patient and healthcare professional (HCP) perspectives on the potential development and uses of AI for clinical decision-making in emergency general surgery.
Methods: MAIDEN-Q is a qualitative phenomenological study. Adult patients who had recently undergone emergency abdominal surgery were purposefully sampled. Interviews were recorded, transcribed and thematically analysed.
Results: Four major themes emerged from 15 patient and 9 HCP interviews.
AI in healthcare: Patients and HCPs accepted the use of AI, with most seeing its integration as positive and inevitable.
Use of AI in clinical decision-making: Patients and HCPs supported the role of AI as an adjunct to diagnostic and treatment decision-making processes. Patients expressed reservations about how subjective clinical information, such as pain, could be used accurately.
Training ML models: Using healthcare records to train ML models was deemed ethical, but some patients felt access should be limited to non-intimate data. Patients were explicit about not wanting their healthcare data to be given or sold to commercial entities.
Clinical and research governance: Patients felt ML model development should be regulated. Responsibility for 'wrong decisions' lies with the clinician or the manufacturer of the AI.
Discussion: Implementation of AI in healthcare is generally supported. Where explicit individual consent has not been sought, a cautious and transparent approach to acquiring and using the healthcare data required to train ML models is needed, to avoid undermining public trust or the patient-doctor relationship.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations