This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Clinical AI is not (yet) trustworthy – but it could be (Preprint)
0 citations · 14 authors · 2025
Abstract
<sec> <title>UNSTRUCTURED</title> The shift toward trustworthy artificial intelligence (AI) in healthcare marks a pivotal transformation. Traditionally, clinical AI systems have lacked dynamic trust integration across their lifecycle. With structured governance frameworks, AI in healthcare is evolving—ushering in a new era of trust-enabling technologies. In this Viewpoint, we present a framework grounded in the Assessment List for Trustworthy Artificial Intelligence (ALTAI) and applied within the Horizon Europe AI-PROGNOSIS project to embed ethical, technical, and regulatory safeguards across the AI lifecycle. By surfacing implementation tensions and integrating normative, technical, and regulatory safeguards, we outline a replicable path for building adaptive, trust-enabling infrastructures in clinical practice, demonstrating that while clinical AI is not yet trustworthy, structured, lifecycle-oriented governance makes it possible. </sec>
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations