This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Ethical, legal, and social assessment of AI-based technologies for prevention and diagnosis of rare diseases in health technology assessment process
Citations: 0
Authors: 11
Year: 2025
Abstract
Background: While the HTA community appears well-equipped to assess preventive and diagnostic technologies, certain limitations persist in evaluating technologies designed for rare diseases, including those based on Artificial Intelligence (AI). In Europe, the EUnetHTA Core Model® serves as a reference for assessing preventive and diagnostic technologies. This study aims to identify key ethical, legal, and social issues related to AI-based technologies for the prevention and diagnosis of rare diseases, proposing enhancements to the Core Model.

Methods: An exploratory sequential mixed methods approach was used, integrating a PICO-guided literature review and a focus group. The review analyzed six peer-reviewed articles and compared the findings with a prior study on childhood melanoma published in this journal (Healthcare), retaining only newly identified issues. A focus group composed of experts in ethical, legal, and social domains provided qualitative insights.

Results: Thirteen additional issues and their corresponding questions were identified. Ethical concerns related to rare diseases included insufficient disease history knowledge, lack of robust clinical data, absence of validated efficacy tools, overdiagnosis/underdiagnosis risks, and unknown ICER thresholds. Defensive medicine was identified as a legal issue. For AI-based technologies, concerns included discriminatory outcomes, explicability, and environmental impact (ethical); accountability and reimbursement (legal); and patient involvement and job losses (social).

Conclusions: Integrating these findings into the Core Model enables a comprehensive HTA of AI-based rare disease technologies. Beyond the Core Model, these issues may inform broader assessment frameworks, ensuring rigorous and ethically responsible evaluations.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations