This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Ethical framework for responsible foundational models in medical imaging
Citations: 8 · Authors: 36 · Year: 2025
Abstract
The emergence of foundational models represents a paradigm shift in medical imaging, offering extraordinary capabilities in disease detection, diagnosis, and treatment planning. These large-scale artificial intelligence systems, trained on extensive multimodal and multi-center datasets, demonstrate remarkable versatility across diverse medical applications. However, their integration into clinical practice presents complex ethical challenges that extend beyond technical performance metrics. This study examines the critical ethical considerations at the intersection of healthcare and artificial intelligence. Patient data privacy remains a fundamental concern, particularly given these models' requirement for extensive training data and their potential to inadvertently memorize sensitive information. Algorithmic bias poses a significant challenge in healthcare, as historical disparities in medical data collection may perpetuate or exacerbate existing healthcare inequities across demographic groups. The complexity of foundational models presents significant challenges regarding transparency and explainability in medical decision-making. We propose a comprehensive ethical framework that addresses these challenges while promoting responsible innovation. This framework emphasizes robust privacy safeguards, systematic bias detection and mitigation strategies, and mechanisms for maintaining meaningful human oversight. By establishing clear guidelines for development and deployment, we aim to harness the transformative potential of foundational models while preserving the fundamental principles of medical ethics and patient-centered care.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations
Authors
- Debesh Jha
- Görkem Durak
- Abhijit Das
- Jasmer Sanjotra
- Onkar Susladkar
- Suramyaa Sarkar
- Ashish Rauniyar
- Nikhil Kumar Tomar
- Linkai Peng
- Sirui Li
- Koushik Biswas
- Emel Aktaş
- Elif Keleş
- Matthew Antalek
- Zheyuan Zhang
- Bin Wang
- Xin Zhu
- Hongyi Pan
- Deniz Seyithanoğlu
- Alpay Medetalibeyoğlu
- Vanshali Sharma
- Vedat Çiçek
- Amir Ali Rahsepar
- Rutger Hendrix
- Ahmet Enis Çetin
- Bulent Aydogan
- Mohamed E. Abazeed
- Frank H. Miller
- Rajesh N. Keswani
- Hatice Savas
- Sachin Jambawalikar
- Daniela P. Ladner
- Amir A. Borhani
- Concetto Spampinato
- Michael B. Wallace
- Ulas Bagci
Institutions
- Northwestern University (US)
- SINTEF Digital
- SINTEF (NO)
- University of Illinois Chicago (US)
- University of Catania (IT)
- University of Chicago (US)
- Northwestern University (PH)
- Northwestern Medicine (US)
- Columbia University (US)
- City University of New York (US)
- Columbia University Irving Medical Center (US)
- Mayo Clinic in Florida (US)
- Jacksonville College (US)