This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Transformer Language Models for Neurology Research with Electronic Health Records: Current State of the Science
Citations: 0
Authors: 3
Year: 2025
Abstract
This review provides an overview of the emergence and application of transformer-based language models in electronic health records in neurology. Transformer architectures are well-suited for neurological data due to their ability to model complex spatiotemporal patterns and capture long-range dependencies, both characteristic of neurological conditions and their documentation. We introduce the foundational principles of transformer models and outline the model training and evaluation frameworks commonly used in clinical text processing. We then examine current applications of transformers in neurology, spanning disease detection and diagnosis, phenotyping and symptom extraction, and outcome and prognosis prediction, and synthesize emerging patterns in model adaptation and evaluation strategies. Additionally, we discuss the limitations of current models, including generalizability, model bias, and data privacy, and propose future directions for research and implementation. By synthesizing recent advances, this review aims to guide future efforts in leveraging transformer-based language models to improve neurological care and research.
Related Works
"Why Should I Trust You?"
2016 · 14,218 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,589 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,386 citations