This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Integrating Human Factors into Trustworthy AI for Healthcare
Citations: 0
Authors: 3
Year: 2023
Abstract
Trustworthy AI (TAI) presents a comprehensive framework that emphasizes critical attributes for AI systems. This framework includes accountability, social and environmental well-being, diversity, non-discrimination, fairness, transparency, privacy and data governance, technical robustness and safety, and human agency and oversight [2]. Moreover, organizational trust is intertwined with the trustworthiness, trust, and outcomes of any AI system [3]. Within healthcare, AI finds application in diverse functions, including diagnostics, management, and training [1]. For instance, the Infection Tracking System, which utilizes the Access Risk Knowledge (ARK) Platform and applies machine learning and analytics, provides a real-world example in healthcare. This system is the primary focus of our study due to its practical relevance. However, when organizational trust in such AI systems is lacking, various challenges can arise [1]. The core objective of this study is to establish a framework for comprehending how organizational trust in the use of AI systems is established and quantified within a healthcare context. The CUBE socio-technical system analysis (STSA) approach will be applied to understand organizational trust in AI systems at St. James Hospital in Ireland. To gather data, questionnaires and interviews will be used. Analyzing this system will enable an understanding of organizational trust, of how the system affects staff output, and of ways to improve that trust.

Expected Impact: This study is expected to develop methods and metrics for measuring organizational trust in a hospital. This will enhance the ability of hospital staff members to adopt and use AI systems, and will inform the development of future TAI systems.

Acknowledgments
This work was conducted with the financial support of the Science Foundation Ireland Centre for Research Training in Digitally-Enhanced Reality (d-real) under Grant No. 18/CRT/6224 and the ADAPT Centre for Digital Content Technology, which is funded under the SFI Research Centres Programme (Grant 13/RC/2106 2) and is co-funded under the European Regional Development Fund. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,287 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,140 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,534 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,450 citations