This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Assessing trust and acceptance of an AI workflow assistant.
Citations: 0
Authors: 5
Year: 2025
Abstract
488

Background: To support investment and use, provider organizations and staff must trust AI applications. However, experience measuring trust is minimal. To build experience assessing trust in AI, we administered a validated survey measuring trust and acceptance of a general-purpose AI workflow assistant at a cancer center.

Methods: At Memorial Sloan Kettering Cancer Center (MSKCC) in New York City, Microsoft Copilot was made available to a subset of clinical and administrative staff in 2024 to help determine whether the application should be expanded to the entire organization. We adapted the TrAAIT (Trust and Acceptance of Artificial Intelligence Technology) survey with 11 Likert questions (1-5 scale, with 3 considered acceptable) assessing overall trust (comprising information credibility, system performance, and application value) and acceptance of Copilot. We administered the survey in May 2025.

Results: Among 322 clinician and staff users, 137 (42.5%) responded to the survey. 128 (93.4%) had used Copilot for more than 4 months. Primary uses for the application were meeting summarization, slide deck creation, and email drafting. The overall level of trust was 3.74 (all trust measures were above 3.5); acceptance was 4.28.

Conclusions: Respondents had a high to very high level of trust and acceptance in Copilot, suggesting that the application would be broadly accepted and used at our institution. Future work should correlate levels of trust and acceptance with actual use. Tools assessing trust in AI can be incorporated into procurement to help organizations decide which AI technologies to invest in. In our case all trust measures were high; however, organizations can use low scores to identify areas for improvement. This study was carried out on an administrative application; the same methodology could apply to clinical AI applications (e.g., prediction models, generative AI).
Trust Measure              Contributing Elements                                      Score
Overall Trust              Aggregate of the next 3 rows                               3.74
  Information Credibility  Information timeliness, accuracy, quality, satisfaction    3.71
  System Performance       System reliability, adaptability                           3.51
  Application Value        Ease of use, usefulness                                    3.99
Acceptance                 Willingness, likeliness of use                             4.28
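The reported overall trust score (3.74) is consistent with a simple unweighted mean of the three sub-scores in the table. A minimal sketch of that aggregation (the exact weighting used by the TrAAIT instrument is an assumption here; variable names are illustrative):

```python
# Sub-scores from the table (1-5 Likert scale, 3 considered acceptable)
sub_scores = {
    "information_credibility": 3.71,
    "system_performance": 3.51,
    "application_value": 3.99,
}

# Overall trust as the unweighted mean of the three sub-measures,
# rounded to two decimals as in the reported table
overall_trust = round(sum(sub_scores.values()) / len(sub_scores), 2)
print(overall_trust)  # 3.74, matching the reported aggregate
```

This reproduces the table's aggregate under an equal-weights assumption; if the survey weights sub-measures by item count, the exact computation could differ slightly.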
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations