This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The Role of Large Language Models (LLMs) in Breast Imaging Today and in the Near Future
2
Citations
4
Authors
2025
Year
Abstract
This narrative review focuses on the integration of large language models (LLMs), such as GPT-4 and Gemini, into breast imaging. LLMs excel in understanding, processing, and generating human-like text, with potential applications ranging widely from decision-making to radiology reporting support. LLMs show promise in addressing current critical challenges, including rising demands for imaging services concurrent with an increasing shortage in the radiologist workforce. Their ability to integrate clinical guidelines and generate standardized, evidence-based reports has the potential to improve diagnostic consistency and reduce inter-reader variability. Emerging multimodal capabilities further extend their utility, enabling the integration of textual and visual data for tasks such as tumor classification and decision-making. Despite these advancements, significant challenges remain. LLMs often suffer from limitations such as hallucinations, biases in training datasets, and domain-specific knowledge gaps. These issues can affect their reliability, particularly in nuanced tasks like Breast Imaging Reporting and Data System categorization and multimodal image assessment. Moreover, ethical concerns about data privacy, biased outputs, and regulatory compliance must be addressed before effective deployment in the clinical setting. Current studies suggest that while LLMs can complement human expertise, their performance still lags behind that of radiologists in key areas, particularly in tasks requiring complex medical reasoning or direct image analysis. Looking ahead, LLMs are poised to play a crucial role in breast imaging by optimizing workflows, supporting multidisciplinary meetings, and improving patient education. However, their successful integration will depend on proper context training, robust validation, and ethical oversight, with human supervision as a crucial safeguard. EVIDENCE LEVEL: 5. TECHNICAL EFFICACY: Stage 2.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations