This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Requirements Engineering for AI-Driven Healthcare Solutions: A Case Study Combining Social Innovation and Multistakeholder Engagement (Preprint)
Citations: 0
Authors: 19
Year: 2025
Abstract
BACKGROUND: The successful design and implementation of Artificial Intelligence (AI)-driven solutions in healthcare require early and continuous multidisciplinary and multiprofessional collaboration. However, diverse disciplinary educational backgrounds, varying languages, and cultural or geographic differences can lead to misunderstandings. To bridge this gap, a structured approach to AI requirements specification can facilitate a shared terminology and a deep mutual understanding among stakeholders, serving both as a guide for technological development and as a means of defining clear pathways for clinical implementation. While technical requirements are well established in traditional technology development domains, this structured approach remains relatively underutilised within clinical and social science contexts. Consequently, valuable insights derived from participatory and stakeholder-driven approaches are often overlooked, limiting the relevance and trustworthiness of AI systems in healthcare settings.

OBJECTIVE: This study presents a methodology for requirements gathering, specification, mapping, and verification, specifically engineered for the complex, multi-stakeholder environment of clinically applied AI. The methodology was implemented in an international multidisciplinary project evaluating an AI-based tool for predicting response to neoadjuvant chemotherapy in breast cancer, where it forms part of the AI validation framework developed in that project.

METHODS: The process for AI requirements gathering, specification, and monitoring included three iterative rounds of discussion, engaging nearly 150 social, clinical, technical, ethical, and regulatory experts, as well as patients, across Europe, South America, North Africa, and Eurasia. It combines established requirements engineering methods (including the MoSCoW prioritisation framework) with social innovation techniques to ensure inclusivity and contextual relevance.

RESULTS: A key finding is the successful development of a structured framework that systematically integrates technical feasibility with critical clinical, ethical, and regulatory constraints. It is supplemented with an extensive list of 184 actionable, consensus-based requirements, categorised by stakeholder group, providing valuable insights for AI researchers in the oncology field with the potential to be transferable to other digital health domains. The requirements align with the FUTURE-AI framework, ensuring the tool is trustworthy from a multi-stakeholder perspective and comprehensively addresses fairness, usability, transparency, universality, robustness, and explainability.

CONCLUSIONS: The proposed methodology represents a significant advancement for requirements engineering in digital health by extending traditional technical processes to systematically incorporate non-technical requirements from diverse global stakeholders. This unified approach is essential for ensuring AI solutions are not only technically robust but also clinically relevant, legally compliant, and socially acceptable.

CLINICAL TRIAL: Not applicable
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations