This is an overview page with metadata for this scientific work. The full article is available from the publisher.
CLEAR: An Auditable Foundation Model for Radiology Grounded in Clinical Concepts
Citations: 0
Authors: 12
Year: 2026
Abstract
“Black box” deep learning models for medical image interpretation limit clinical trust and analysis of performance degradation. Here, we introduce Concept-Level Embeddings for Auditable Radiology (CLEAR), an auditable foundation model based on clinical concepts. Trained on over 0.87 million image-report pairs from 239,091 patients, CLEAR learns a visual representation and projects chest X-rays into a semantically rich space defined by large language model embeddings, making every prediction traceable to specific radiological observations. External validation on four large, physician-annotated datasets from the United States, Europe, and Asia shows that CLEAR not only achieves state-of-the-art classification performance but also enables novel applications: auditable zero-shot pathology detection, systematic identification of radiological confounders, and the creation of expert-level concept bottleneck models from data-driven concepts. By integrating clinical knowledge directly into its reasoning process, CLEAR offers a framework for robust model auditing, safer deployment, and enhanced physician-AI collaboration, advancing towards trustworthy medical AI.
Related Works
Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study
2020 · 22,607 citations
La certeza de lo impredecible: Cultura Educación y Sociedad en tiempos de COVID19 [The Certainty of the Unpredictable: Culture, Education and Society in Times of COVID-19]
2020 · 19,271 citations
A Multi-Modal Distributed Real-Time IoT System for Urban Traffic Control (Invited Paper)
2024 · 14,251 citations
UNet++: A Nested U-Net Architecture for Medical Image Segmentation
2018 · 8,479 citations
Review of deep learning: concepts, CNN architectures, challenges, applications, future directions
2021 · 7,095 citations