OpenAlex · Updated hourly · Last updated: 19.03.2026, 14:41

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Intra- and Inter-Observer Reliability of ChatGPT-4o in Thyroid Nodule Ultrasound Feature Analysis Based on ACR TI-RADS: An Image-Based Study

2025 · 0 citations · Diagnostics · Open Access
Open full text at the publisher

0 citations · 6 authors · published 2025

Abstract

<b>Background/Objectives:</b> Advances in large language models like ChatGPT-4o have extended their use to medical image analysis. Accurate assessment of thyroid nodule ultrasound features using ACR TI-RADS is crucial for clinical practice. This study aims to evaluate ChatGPT-4o's intra-observer consistency, and its agreement with an expert, in analyzing these features from ultrasound images based on ACR TI-RADS. <b>Methods:</b> This cross-sectional study used ultrasound images from 100 thyroid nodules collected prospectively between May 2019 and August 2021. Ultrasound images were analyzed by ChatGPT-4o, following ACR TI-RADS guidelines, to assess thyroid nodule features, including composition, echogenicity, shape, margin, and echogenic foci. The analysis was repeated after one week to evaluate intra-observer reliability. The ultrasound images were also analyzed by an ultrasound expert to evaluate inter-observer reliability. Agreement was measured using Cohen's <i>Kappa</i> coefficient, and concordance rates were calculated based on alignment with the expert's reference classifications. <b>Results:</b> Intra-observer agreement for ChatGPT-4o was moderate for composition (<i>Kappa</i> = 0.449) and echogenic foci (<i>Kappa</i> = 0.404), with substantial agreement for echogenicity (<i>Kappa</i> = 0.795). Agreement was notably low for shape (<i>Kappa</i> = -0.051) and margin (<i>Kappa</i> = 0.154). Inter-observer agreement between ChatGPT-4o and the expert was generally low, with <i>Kappa</i> values ranging from -0.006 to 0.238, the highest being for echogenic foci. Overall concordance rates between ChatGPT-4o and expert evaluations ranged from 46.6% to 48.2%; across individual features, concordance was highest for shape (65%) and lowest for echogenicity (29%). <b>Conclusions:</b> ChatGPT-4o showed favorable consistency in assessing some thyroid nodule features in intra-observer analysis, but notable variability in others. Inter-observer comparisons with expert evaluations revealed generally low agreement across all features, despite acceptable concordance for certain imaging characteristics. While promising for specific ultrasound features, ChatGPT-4o's consistency and accuracy still differ significantly from expert assessments.
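The abstract quantifies agreement with Cohen's Kappa, which corrects raw percent agreement for the agreement expected by chance. As a quick illustration of what that statistic measures, here is a minimal Python sketch; the rater labels below are hypothetical composition categories, not the study's data:

```python
# Cohen's kappa from scratch: chance-corrected agreement
# between two raters over the same set of items.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items rated identically
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two raters were independent
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example ratings (not from the study)
a = ["solid", "cystic", "solid", "mixed", "solid"]
b = ["solid", "cystic", "mixed", "mixed", "solid"]
print(cohens_kappa(a, b))  # 0.6875: 80% raw agreement, reduced after chance correction
```

A Kappa of 1 means perfect agreement, 0 means agreement no better than chance, and negative values (as reported for shape, -0.051, and -0.006 in the inter-observer comparison) indicate agreement worse than chance.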


Topics

Artificial Intelligence in Healthcare and Education · Radiomics and Machine Learning in Medical Imaging · Meta-analysis and systematic reviews