OpenAlex · Updated hourly · Last updated: 15 Apr 2026, 05:00

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The Specialist’s Paradox: Generalist AI May Better Organize Medical Knowledge

2025 · 0 citations · Algorithms · Open Access
Open full text at publisher

Citations: 0 · Authors: 4 · Year: 2025

Abstract

This study investigates the ability of six pre-trained sentence transformers to organize medical knowledge by performing unsupervised clustering on 70 high-level Medical Subject Headings (MeSH) terms across seven medical specialties. We evaluated models from different pre-training paradigms: general-purpose, domain-adapted, and from-scratch domain-specific. The results reveal a clear performance hierarchy. A top tier of models, including the general-purpose MPNet and the domain-adapted BioBERT and RoBERTa, produced highly coherent, specialty-aligned clusters (Adjusted Rand Index > 0.80). Conversely, models pre-trained from scratch on specialized corpora, such as PubMedBERT and BioClinicalBERT, performed poorly (Adjusted Rand Index < 0.51), with BioClinicalBERT yielding a disorganized clustering. These findings challenge the assumption that domain-specific pre-training guarantees superior performance for all semantic tasks. We conclude that model architecture, alignment between the pre-training objective and the downstream task, and the nature of the training data are more critical determinants of success for creating semantically coherent embedding spaces for medical concepts.
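The evaluation described in the abstract (embed MeSH terms, cluster them without supervision, score the clusters against specialty labels with the Adjusted Rand Index) can be sketched as below. This is a minimal illustration, not the authors' code: the study embeds 70 real MeSH terms with six sentence transformers, whereas here synthetic vectors stand in for model embeddings, and the cluster counts and noise scale are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

n_specialties = 7          # seven medical specialties in the study
terms_per_specialty = 10   # 70 high-level MeSH terms total
dim = 768                  # a typical sentence-transformer dimension

# Synthetic stand-in for model embeddings: each specialty's terms
# scatter tightly around that specialty's own random centroid.
centroids = rng.normal(size=(n_specialties, dim))
embeddings = np.vstack([
    centroids[s] + 0.1 * rng.normal(size=(terms_per_specialty, dim))
    for s in range(n_specialties)
])
true_labels = np.repeat(np.arange(n_specialties), terms_per_specialty)

# Unsupervised clustering with k set to the number of specialties.
pred_labels = KMeans(
    n_clusters=n_specialties, n_init=10, random_state=0
).fit_predict(embeddings)

# Adjusted Rand Index: 1.0 means perfect specialty alignment,
# values near 0 mean chance-level agreement.
ari = adjusted_rand_score(true_labels, pred_labels)
print(f"ARI: {ari:.2f}")
```

On a well-organized embedding space like this synthetic one, the ARI approaches 1.0; the paper's top-tier models (MPNet, BioBERT, RoBERTa) exceed 0.80 on the real MeSH embeddings, while the from-scratch domain models fall below 0.51.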

Related works

Authors

Institutions

Topics

Machine Learning in Healthcare · Artificial Intelligence in Healthcare and Education