OpenAlex · Updated hourly · Last updated: 21 Apr 2026, 12:17

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Towards Benchmarking Transformer Models for Biomedical Text Simplification

2026 · 0 citations · Scientific Journal of Mehmet Akif Ersoy University · Open Access

Citations: 0 · Authors: 4 · Year: 2026

Abstract

Biomedical texts typically contain a high level of technical terminology and complex sentence structures, which limits their comprehensibility for readers without domain expertise. Text simplification, a natural language processing task, aims to transform complex texts into a more readable and accessible form while preserving their original semantic content. Especially in biomedical texts, simplification can play an essential role in making scientific information understandable to patients and the general public. In this context, this study investigates the text simplification performance of pre-trained general-purpose and domain-specific language models (PLMs) on biomedical texts. The experiments utilize the Cochrane-Simplification dataset, which comprises technical abstracts from systematic reviews and their corresponding plain-language summaries. General-purpose models and summarization-tuned variants (BART-Large, BART-Large-CNN, BART-Large-XSum, PEGASUS-Large, PEGASUS-XSum, T5 and FLAN-T5) are compared alongside domain-specific models (BioBARTv2-Large, SciFive, Clinical-T5) under comparable fine-tuning settings. The models are compared using the ROUGE, BLEU, BERTScore and SARI metrics to measure textual similarity and semantic coherence. The results indicate that BART-based models achieve superior performance on the medical text simplification task.
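Among the metrics listed in the abstract, SARI is the one specific to simplification: it scores a system output by how well it adds, keeps, and deletes words relative to both the source sentence and a reference simplification. The sketch below is a deliberately simplified unigram illustration of that idea, not the official metric (real SARI averages n-gram orders 1 through 4, supports multiple references, and differs in several details); the function name and example sentences are ours.

```python
def unigram_sari(source: str, output: str, reference: str) -> float:
    """Toy unigram SARI-style score: mean of add-F1, keep-F1,
    and delete-precision (real SARI scores deletion by precision only)."""
    s, o, r = set(source.split()), set(output.split()), set(reference.split())

    def f1(p: float, rc: float) -> float:
        return 2 * p * rc / (p + rc) if p + rc else 0.0

    # ADD: new tokens in the output that the reference also introduced
    add_cand, add_ref = o - s, r - s
    add_good = add_cand & r
    add = f1(len(add_good) / len(add_cand) if add_cand else 0.0,
             len(add_good) / len(add_ref) if add_ref else 0.0)

    # KEEP: source tokens retained in the output that the reference also kept
    keep_cand, keep_ref = s & o, s & r
    keep_good = keep_cand & r
    keep = f1(len(keep_good) / len(keep_cand) if keep_cand else 0.0,
              len(keep_good) / len(keep_ref) if keep_ref else 0.0)

    # DELETE: source tokens dropped from the output that the reference also dropped
    del_cand = s - o
    del_good = del_cand - r
    delete = len(del_good) / len(del_cand) if del_cand else 0.0

    return (add + keep + delete) / 3


# A perfect simplification matches the reference's edits exactly:
src = "the myocardial infarction was severe"
ref = "the heart attack was severe"
print(unigram_sari(src, "the heart attack was severe", ref))  # → 1.0
print(unigram_sari(src, src, ref))  # copying the source scores much lower
```

In practice such a score is computed over every sentence pair in a test set (here, Cochrane technical abstracts versus their plain-language summaries) and averaged, which is what makes it suitable for comparing fine-tuned models against each other.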


Topics

Text Readability and Simplification · Topic Modeling · Artificial Intelligence in Healthcare and Education