This is an overview page with metadata for this scientific article. The full article is available from the publisher.
PREFER-IT: A transdisciplinary co-created framework to realise inclusive medical AI
Citations: 0
Authors: 36
Year: 2025
Abstract
Artificial intelligence (AI) in healthcare holds transformative potential but risks exacerbating existing health disparities if inclusivity is not explicitly accounted for. This study addresses the disconnected discussions on inclusive medical AI by developing a comprehensive framework, PREFER-IT. This framework is based on the outcomes of a five-day transdisciplinary co-creation workshop that involved 37 experts from diverse backgrounds, including healthcare, ethics, law, social sciences, AI, and patient advocacy. For this workshop, we used design thinking and participatory methodologies to develop a framework for realising inclusive medical AI. We identified three key challenges for realising inclusive medical AI: integrating the lived experiences and stakeholder voices across the AI lifecycle, designing data collection practices that promote fairness and prevent inequalities, and fostering regulatory frameworks to uphold human rights and promote inclusivity. The analysis of participants’ perspectives informed the development of eight key thematic clusters of PREFER-IT: Participatory and co-design approaches (P), Representative and diverse data (R), Education and digital literacy (E), Fairness (F), Ethical and legal accountability (E), Real-world validation and feedback (R), Inclusive communication (I), and Technical interoperability (T). These elements were mapped across structural layers of AI (humans, data, system, process, and governance) and the AI lifecycle to guide inclusive design, development, validation, implementation, monitoring, and governance. This framework fosters stakeholder engagement and systemic change, positioning inclusion as a guiding principle in practice. PREFER-IT offers a practical and conceptual contribution for how to include ethical, legal and societal aspects when aiming to foster responsible and inclusive AI in healthcare.
Author Summary
Artificial intelligence (AI) is being used more and more in healthcare to improve diagnosis, treatment, and personalised care. However, if not designed carefully, these technologies can unintentionally increase existing inequalities and exclude certain groups from their benefits. In our study, we brought together experts from healthcare, ethics, law, social sciences, and patient advocacy to explore how AI in medicine can be made more inclusive. Over five days, we worked together to identify key issues and come up with practical solutions. We focused on three main areas: 1) Ensuring diverse voices are heard during the development of AI tools; 2) Making data collection fair and representative; and 3) Creating regulations that protect human rights. From the discussions of the workshop, we created the PREFER-IT framework, which outlines eight key principles for inclusive AI:
- Participatory and co-design approaches
- Representative and diverse data
- Education and digital literacy
- Fairness
- Ethical and legal accountability
- Real-world validation and feedback
- Inclusive communication
- Technical interoperability
This framework helps guide developers, policymakers, and healthcare professionals in creating AI systems that are not only effective but also fair and respectful of all users. Our work emphasises the importance of involving patients and communities in shaping the future of AI.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations
Authors
- Patrícia Pita Ferreira
- Sara Soriano Longarón
- Wiam Bouisaghouane
- Jetse Goris
- Anne H. Hoekman
- Balázs Markos
- Benjamin Maus
- Giorgia Pozzi
- H. Hasan
- Indrė Kalinauskaitė
- Jonáh Stunt
- Joosje D. Kist
- Judith van der Elst
- Katell Maguet
- Liv Ziegfeld
- Maarten Cuypers
- Megan Milota
- Michelle Habets
- Sara Colombo
- Špela Petrič
- Steff Groefsema
- Steven Warmelink
- Elja Daae
- Giovanni Briganti
- Ildikó Vajda
- Matías Valdenegro-Toro
- Matthias Braun
- Pieter Jeekel
- Suseth Goosen
- Alex Schepel
- Laxmie Ester
- Riane Kuzee
- Sophie de Klerk
- Claudine J. C. Lamoth
- Lisa Ballard
- Mirjam Plantinga
Institutions
- University Medical Center Groningen (NL)
- University of Groningen (NL)
- Malmö University (SE)
- Delft University of Technology (NL)
- Leidsche Rijn Julius Health Centers (NL)
- Leiden University Medical Center (NL)
- Erasmus University Rotterdam (NL)
- Erasmus MC (NL)
- Hanze University of Applied Sciences (NL)
- Oldham Council (GB)
- Human Factors (Norway) (NO)
- Rathenau Instituut (NL)
- Ministry of the Interior and Kingdom Relations (NL)
- University of Mons (BE)
- Gender Studies (CZ)
- University of Bonn (DE)
- UK Coal (GB)
- Fondation Université Française en Arménie (AM)
- Stadtwerke Köln (Germany) (DE)
- University of Southampton (GB)