This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Developing a Framework for Trustworthy AI-Supported Knowledge Management in the Governance of Risk and Change
5 citations · 12 authors · 2022
Abstract
This paper proposes a framework for developing a trustworthy artificial intelligence (AI) supported knowledge management system (KMS) by integrating existing approaches to trustworthy AI, trust in data, and trust in organisations. We argue that improvement in three core dimensions (data governance, validation of evidence, and reciprocal obligation to act) will lead to the development of trust in the three domains of the data, the AI technology, and the organisation. The framework was informed by a case study implementing the Access-Risk-Knowledge (ARK) platform for mindful risk governance across three collaborating healthcare organisations. Subsequently, the framework was applied within each organisation with the aim of measuring trust to this point and generating objectives for future ARK platform development. The resulting discussion of ARK and the framework has implications for the development of KMSs, the development of trustworthy AI, and the management of risk and change in complex socio-technical systems.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations