This is an overview page with metadata for this scientific article. The full article is available from the publisher.
When the Model Trains You: Induced Belief Revision and Its Implications on Artificial Intelligence Research and Patient Care — A Case Study on Predicting Obstructive Hydronephrosis in Children
11 citations · 10 authors · published 2024
Abstract
Exposure to research data and artificial intelligence (AI) model predictions may introduce several sources of bias into clinical decision-making and model evaluation, including anchoring bias, automation bias, and data leakage. In this case study, we describe a new source of bias, termed "induced belief revision," which we discovered while developing and testing an AI model to predict obstructive hydronephrosis in children from their renal ultrasounds. After a silent trial of our hydronephrosis AI model, we observed an unintentional but clinically significant change in practice, characterized by a reduction in nuclear scans from 80% to 58% (P=0.005). This occurred in the absence of any identifiable changes in clinical workflow, personnel, practice guidelines, or patient characteristics over time. We postulate that repeated exposure to model predictors and their corresponding labels altered clinical decision-making through a learned intuition of the model's behavior. The phenomenon takes two forms: data-induced and model-induced belief revision. Data-induced belief revision occurs when clinical end-users develop an unconscious stimulus–response bond based on model features and labels from the data itself. In contrast, model-induced belief revision arises from repeated exposure to model inputs and outputs, leading clinicians to anticipate model predictions on unseen data. We hypothesize that model-induced belief revision may emerge further along the AI translational pathway, including during the silent and clinical trial phases. Model-induced belief revision may threaten the scientific integrity of AI research because it can lead to underestimated differences between AI predictions and clinical judgment, which may itself be unconsciously influenced by this phenomenon.
If AI-induced changes in clinical practice are not grounded in evidence, model-induced belief revision may harm patients regardless of whether the model is ultimately deployed. We propose strategies to test for and mitigate this phenomenon, emphasizing the importance of maintaining clear boundaries between clinical end-users and the model development and validation phases. By showing how induced belief revision likely affected care for obstructive hydronephrosis at our institution, we hope to raise broader awareness of this phenomenon and thereby improve clinical AI model development, deployment, and evaluation. By implementing the recommended strategies, researchers and clinical end-users can navigate the pitfalls associated with this phenomenon and help ensure the validity, reliability, and safety of AI applications in medicine.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations