This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Artificial Intelligence in Rare Disease Diagnosis: A Clinical Milestone with Ethical Considerations
Citations: 0
Authors: 5
Year: 2025
Abstract
Artificial intelligence (AI) has emerged with great promise to transform rare disease diagnosis. Drawing on large databases of genomic, imaging, and electronic health record data, AI systems can identify patterns that even experienced physicians often overlook (1). For patients with orphan diseases, who may endure years of diagnostic uncertainty, this capacity to shorten the diagnostic journey is crucial. Yet despite its advantages, the rapid introduction of AI into diagnostic workflows raises major ethical and clinical problems that demand further investigation.

The limited interpretability of most AI algorithms is a central difficulty. Deep learning models are typically black boxes, producing diagnostic outputs without a clear justification. This undermines transparency and may compromise shared decision-making between physician and patient (2). Trust grounds medical decision-making, especially in rare disease diagnosis, where patient involvement and individualized treatment are vital.

Bias in AI training data is another critical problem. Most algorithms are trained on data from high-income countries and fail to account for under-resourced populations (3). This can compromise diagnostic accuracy in ethnically and geographically diverse populations and thereby worsen existing health inequities. For instance, recent AI models applied to genetic disease diagnosis have shown performance gaps in underrepresented communities (4).

A further concern is the possibility that physicians will come to rely too heavily on AI systems. A 2024 multicenter simulation study showed that clinicians can become over-dependent on AI output even when it contradicts clinical intuition (5). Particularly in rare disease settings, where subtlety and contextual understanding are essential, such over-reliance may erode critical thinking and contribute to unfavorable patient outcomes.

Although AI has the potential to revolutionize rare disease identification, its application in clinical practice must adhere to the principles of transparency, equity, and ethical responsibility. Independent validation in real-world practice, inclusive training datasets, and ongoing clinician education are required to ensure that technical advances translate into actual patient benefit.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations