This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Artificial intelligence and computer-aided diagnosis in diagnostic decisions: 5 questions for medical informatics and human-computer interface research
3
Citations
3
Authors
2025
Year
Abstract
Effective AI integration requires human-centered and adaptive design. Five central research questions address: (1) what type and format of information AI should provide; (2) when information should be presented; (3) how explainable AI affects diagnostic decisions; (4) how AI influences automation bias and complacency; and (5) the risks of skill decay due to reliance on AI. Each question underscores the importance of balancing efficiency, accuracy, and clinician expertise while mitigating bias and skill degradation. AI holds promise for improving diagnostic accuracy and efficiency, but realizing its potential requires post-deployment evaluation, equitable access, clinician oversight, and targeted training. AI must complement, rather than replace, human expertise, ensuring safe, effective, and sustainable integration into diagnostic decision-making. Addressing these challenges proactively can maximize AI's potential across healthcare and other high-stakes domains.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations