OpenAlex · Updated hourly · Last updated: Mar 14, 2026, 23:43

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Large Language Model Architectures in Health Care: Scoping Review of Research Perspectives

2025 · 3 citations · Journal of Medical Internet Research · Open Access
Open full text at the publisher

Citations: 3 · Authors: 3 · Year: 2025

Abstract

Our study suggests that GPT-based models are better suited for communicative purposes such as report generation or patient interaction. BERT-based models seem to be better suited for innovative applications such as classification or knowledge discovery. This could be due to the architectural differences: GPT processes language unidirectionally and BERT bidirectionally, allowing a more in-depth understanding of the text. In addition, BERT-based models seem to allow more straightforward extensions for domain-specific tasks, which generally lead to better results. In summary, health care professionals should consider the benefits and differences of the LLM architecture families when selecting a suitable model for their intended purpose.
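As an illustrative aside (not part of the article itself): the directionality difference the abstract refers to can be sketched as attention masks. GPT-style models use a causal (lower-triangular) mask, so each token attends only to earlier positions, while BERT-style models use a full mask, so each token attends to the whole sequence. A minimal sketch, assuming standard Transformer attention masking:

```python
def attention_mask(seq_len, causal):
    """Return a seq_len x seq_len boolean grid of allowed attention.

    causal=True  -> GPT-style (unidirectional): token i sees positions 0..i.
    causal=False -> BERT-style (bidirectional): every token sees every position.
    """
    return [[(j <= i) if causal else True for j in range(seq_len)]
            for i in range(seq_len)]

gpt_mask = attention_mask(4, causal=True)    # lower-triangular: no future tokens
bert_mask = attention_mask(4, causal=False)  # all True: full context both ways
```

With `causal=True`, position 0 cannot attend to position 3 (no look-ahead), which is what limits a GPT-style model to left-to-right context; with `causal=False`, every position sees the full sequence, which is the bidirectional context the abstract credits for BERT's deeper text understanding.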

Topics

Artificial Intelligence in Healthcare and Education · Artificial Intelligence in Healthcare · Machine Learning in Healthcare