This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Impact and Prediction of AI Diagnostic Report Interpretation Type on Patient Trust
Citations: 6
Authors: 2
Year: 2023
Abstract
With the rapid development of AI technology and the rise of AI in health care, AI diagnostic techniques are gaining attention. Studies have sought to enhance the reliability of AI with respect to algorithmic accuracy and its "black box" nature, but few have explored the impact of AI interpretation type on patient trust. In this paper, we use subjective scales and objective eye-tracking techniques, grounded in the elaboration likelihood model (ELM) and cognitive load theory, to explore the trust of patients with different levels of health literacy in global and partial interpretations of AI diagnostic reports, and to predict that trust. First, based on existing AI diagnostic report formats, we remove distracting information and reconstruct the report's display elements in Axure RP9, and we construct patient health literacy and patient trust evaluation scales using the questionnaire method. We then conduct scenario-simulation experiments with eye-tracking technology to analyze and compare patients' perceived trust against objective eye-movement measurements. Finally, we apply the Pearson correlation test and the partial least squares (PLS) method to build a relationship model between patient trust and eye-movement indices, and verify the model's validity. The results show that patients with different health literacy differ in their trust in different AI interpretation types; that they differ in their gaze behavior across interpretation types of diagnostic reports; and that the relationship model between patient trust and eye-movement indices can effectively predict perceived trust. These findings extend research on trust calibration using eye-tracking technology in the medical field, while providing a reliable scientific basis for designers and developers of intelligent diagnostic applications.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations