This is an overview page with metadata for this scientific work. The full article is available from the publisher.
When AI Eats the Healthcare World - Is Trusting AI Fed, or Earned? (Preprint)
Citations: 0
Authors: 3
Year: 2025
Abstract
<sec> <title>BACKGROUND</title> Perception-based studies are susceptible to bias introduced through the design of their instruments. We demonstrate the need to shift from perception-based to usage-based trust evaluation, emphasizing that trust must be earned through demonstrated reliability rather than assumed from pre-adoption surveys. Our findings suggest that successful AI implementation requires a proactive approach that addresses the complex interplay of human, technical, and organizational factors, grounded in real-world usage data rather than theoretical, perception-driven acceptance measures. </sec> <sec> <title>OBJECTIVE</title> To examine the disconnect between pre-adoption expectations and post-implementation realities of AI in healthcare systems. </sec> <sec> <title>METHODS</title> We assessed the key perception-driven models, namely the Unified Theory of Acceptance and Use of Technology (UTAUT), the Technology Acceptance Model (TAM), and Diffusion of Innovation (DOI), with regard to the pre-adoption of AI in healthcare. We then compared the expectations reported in studies using these pre-adoption models against real-world outcomes documented in post-usage evidence. </sec> <sec> <title>RESULTS</title> Through empirical and anecdotal evidence, this paper demonstrates a disconnect between perception-driven technology adoption frameworks and real-world usage, focusing on the human factors that influence AI adoption and the shortcomings of current perception-focused trust research. </sec> <sec> <title>CONCLUSIONS</title> Real-world usage demonstrates that hype and pre-adoption expectations fall short, which underlies the reluctance or resistance of healthcare providers to fully adopt AI. </sec>
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,527 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,419 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,909 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,578 citations