OpenAlex · Updated hourly · Last updated: 19.03.2026, 11:56

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

CONRep: Uncertainty-Aware Vision-Language Report Drafting Using Conformal Prediction

2026 · 0 citations · arXiv (Cornell University) · Open Access
Open full text at publisher

Citations: 0 · Authors: 8 · Year: 2026

Abstract

Automated radiology report drafting (ARRD) using vision-language models (VLMs) has advanced rapidly, yet most systems lack explicit uncertainty estimates, limiting trust and safe clinical deployment. We propose CONRep, a model-agnostic framework that integrates conformal prediction (CP) to provide statistically grounded uncertainty quantification for VLM-generated radiology reports. CONRep operates at both the label level, by calibrating binary predictions for predefined findings, and the sentence level, by assessing uncertainty in free-text impressions via image-text semantic alignment. We evaluate CONRep using both generative and contrastive VLMs on public chest X-ray datasets. Across both settings, outputs classified as high confidence consistently show significantly higher agreement with radiologist annotations and ground-truth impressions than low-confidence outputs. By enabling calibrated confidence stratification without modifying underlying models, CONRep improves the transparency, reliability, and clinical usability of automated radiology reporting systems.
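The label-level calibration the abstract describes can be illustrated with standard split conformal prediction for a binary finding. The sketch below is not the authors' implementation; it assumes a hypothetical model that outputs a probability for the positive label, uses the common nonconformity score 1 − p(true label), and forms prediction sets from a finite-sample-corrected quantile. A singleton prediction set corresponds to a "high confidence" output.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split conformal calibration: nonconformity = 1 - model prob of the true label."""
    n = len(cal_labels)
    scores = 1.0 - np.where(cal_labels == 1, cal_probs, 1.0 - cal_probs)
    # Finite-sample corrected (1 - alpha) quantile of calibration scores.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

def prediction_set(prob, q):
    """All labels whose nonconformity score falls within the calibrated threshold."""
    return [y for y in (0, 1) if (1.0 - (prob if y == 1 else 1.0 - prob)) <= q]

# Toy calibration data from a well-calibrated synthetic model.
rng = np.random.default_rng(0)
cal_probs = rng.uniform(size=1000)
cal_labels = (rng.uniform(size=1000) < cal_probs).astype(int)

q = conformal_threshold(cal_probs, cal_labels, alpha=0.1)
print(prediction_set(0.98, q))  # confident positive: singleton set
print(prediction_set(0.50, q))  # ambiguous: both labels retained
```

Under this scheme, the prediction set contains the true label with probability at least 1 − α, without retraining or modifying the underlying model, which mirrors the model-agnostic property claimed for CONRep.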


Topics

Artificial Intelligence in Healthcare and Education · Radiology practices and education · Topic Modeling