This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Bounded Rationality in AI-Assisted Medical Decision-Making
Citations: 0 · Authors: 4 · Year: 2025
Abstract
Recent advances in generative AI models have enabled the creation of digital health assistants for patients. However, it remains unclear how patients, especially in the presence of cognitive biases, would utilize them. Drawing on behavioral decision theory (BDT), we analyzed how boundedly rational patients use AI health assistants to make healthcare choices. Our findings show that cognitive biases lead patients to underutilize these assistants, limiting their potential to prompt high-risk patients to seek necessary care and to reduce unnecessary clinical visits among low-risk patients. Moreover, we found that boundedly rational patients become less sensitive to differences in risk, and their decision to seek clinical care is determined primarily by the cost of access to healthcare rather than by the underlying health risk. These findings highlight the need for developers to design bias-mitigating interfaces and for general transparency in the model, and for policymakers to establish safeguards to support effective adoption of these technologies.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,380 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,243 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,671 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,496 citations