This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Validation of an Extended Technology Acceptance Model Framework Incorporating Organisational Culture and Trust in AI usage within Hospitals: A Cross-sectional Study
Citations: 0
Authors: 3
Year: 2026
Abstract
Introduction: The hospital landscape is rapidly evolving, with Artificial Intelligence (AI) emerging as a central component of both administrative and clinical workflows. The classic Technology Acceptance Model (TAM), however, does not adequately account for the dynamic and complex nature of hospital workflows. Aim: To empirically validate an extended TAM that incorporates Organisational Culture (OC) and Trust in AI (TAI), and to examine how these factors influence healthcare professionals’ perceptions of usefulness, Ease of Use (EOU), behavioural intention, and actual AI usage in hospital settings. Materials and Methods: This cross-sectional survey was conducted across five hospitals in India’s National Capital Region using a 27-item instrument. Data were collected via Google Forms between November 2024 and January 2025 from tertiary and quaternary care hospitals known to have adopted or piloted AI applications, including robotic process automation, virtual assistants, and diagnostic imaging systems. Partial Least Squares-Structural Equation Modelling (PLS-SEM) was employed to assess reliability, validity, model fit, path significance, and mediation effects. Results: The results validated the core TAM along with the proposed extended constructs. Key findings indicated that perceived EOU strongly predicted trust, while TAI directly predicted Actual Use (AU), exerting a stronger effect than behavioural intention. Organisational culture indirectly influenced AI adoption by shaping EOU and trust, fully mediating its effect on behavioural intention. Conclusion: AI adoption follows a mediated pathway in which OC indirectly influences intention to use through EOU, trust, and perceived usefulness, with trust emerging as a critical direct antecedent of actual usage. These findings underscore the practical imperative for healthcare administrators to implement robust AI governance mechanisms to enhance trustworthiness and to foster an innovative organisational culture.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,445 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,325 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,761 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,530 citations