This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Investigating the Impact of Prompt Engineering on the Performance of Large Language Models for Standardizing Obstetric Diagnosis Text: Comparative Study
2024 · 24 Citations · 8 Authors · JMIR Formative Research · Open Access
Abstract
After applying LLMs to standardize diagnoses and designing 4 different prompts, we compared the results to those generated by the BERT model. Our findings indicate that QWEN prompts largely outperformed the other prompts, with precision comparable to that of the BERT model. These results demonstrate the potential of unsupervised approaches in improving the efficiency of aligning diagnostic terms in daily research and uncovering hidden information values in patient data.
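The abstract does not reproduce the prompts used in the study. Purely as an illustration of the kind of prompt-based standardization being compared against BERT, the sketch below shows one hypothetical zero-shot setup; the prompt wording, the candidate term list, and the call_llm helper are assumptions, not the paper's actual method.

```python
# Illustrative sketch (not from the paper): mapping free-text obstetric diagnoses
# to standardized terms with a single zero-shot prompt. The term list, prompt
# wording, and call_llm() backend are hypothetical placeholders.

CANDIDATE_TERMS = [
    "Gestational diabetes mellitus",
    "Pre-eclampsia",
    "Placenta previa",
]

def build_prompt(raw_diagnosis: str) -> str:
    """Compose a prompt asking the model to pick exactly one standardized term."""
    terms = "\n".join(f"- {t}" for t in CANDIDATE_TERMS)
    return (
        "You are a clinical coding assistant.\n"
        "Map the following obstetric diagnosis to exactly one standardized term "
        "from the list below. Answer with the term only.\n\n"
        f"Diagnosis: {raw_diagnosis}\n\nStandardized terms:\n{terms}\n"
    )

def standardize(raw_diagnosis: str, call_llm) -> str:
    """call_llm is any function that sends a prompt string to an LLM and returns its reply."""
    return call_llm(build_prompt(raw_diagnosis)).strip()

if __name__ == "__main__":
    # Dummy backend for demonstration; replace with a real LLM client.
    echo_llm = lambda prompt: "Gestational diabetes mellitus"
    print(standardize("GDM, diet-controlled", echo_llm))
```

In a prompt-engineering comparison like the one described, variants of build_prompt (e.g., adding few-shot examples or stricter output constraints) would be evaluated against a supervised baseline such as BERT.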
Topics
Topic Modeling · Machine Learning in Healthcare · Artificial Intelligence in Healthcare and Education