This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Public perceptions of health data sharing for artificial intelligence research: a qualitative focus group study in the UK
0
Citations
8
Authors
2026
Year
Abstract
Objective: Artificial intelligence (AI) in healthcare offers potential to improve diagnosis, patient care and system efficiency. However, AI development, evaluation and post-implementation monitoring require large volumes of health data, and public trust is essential for enabling such data sharing. This study aimed to explore UK public perceptions of health data sharing for AI research, and to identify factors influencing willingness to participate.

Methods and analysis: We conducted eight 90-minute online focus groups in May–July 2024, with 41 purposively sampled adult (18 years or older) members of the UK public, recruited via a national registry and departmental social media. Groups were selected to maximise diversity in age, ethnicity, household income, education, health status and geography. We used thematic analysis to develop themes and subthemes inductively and iteratively.

Results: Three key themes were developed: (1) perceived general risks of health data sharing, including concerns about the limits of anonymisation, the sensitivity and scope of data requested, data governance and security, and trust in different data custodians. Participants viewed anonymisation as essential but fallible, especially for rare conditions or large linked datasets, and held a pragmatic view on sharing data with commercial organisations; (2) individual risk-benefit assessment, reflecting how people weighed potential personal harms, such as discrimination, data misuse and risks for children, against anticipated benefits, including altruism, improved care and the perceived clinical value of AI; (3) informed consent as a foundation for trust, encompassing preferences for clear, study-specific, tailored information about data use and AI purposes, and for consent processes that provided choice, avoided emotional pressure and allowed time for reflection or later withdrawal.
Conclusion: Trust in health data sharing for AI is conditional and shaped by participant-level and study-specific risks and benefits. These findings highlight public expectations of transparent governance and clear justification for data use and public benefit, particularly where commercial involvement is proposed. In the context of expanding healthcare digital infrastructure and emerging UK/European Union regulatory frameworks for AI governance and reporting, understanding these expectations will be essential to building and sustaining the social licence required for large-scale data use, and to supporting equitable participation and representation in AI research.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations