This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Towards Benchmarking Transformer Models for Biomedical Text Simplification
Citations: 0
Authors: 4
Year: 2026
Abstract
Biomedical texts typically contain a high level of technical terminology and complex sentence structures, which limits their comprehensibility for readers without domain expertise. Text simplification, a natural language processing task, aims to transform complex texts into a more readable and accessible form while preserving their original semantic content. In biomedical texts especially, simplification can play an essential role in making scientific information understandable to patients and the general public. In this context, this study investigates the text simplification performance of pre-trained general-purpose and domain-specific language models (PLMs) on biomedical texts. The experiments use the Cochrane-Simplification dataset, which comprises technical abstracts from systematic reviews and their corresponding plain-language summaries. General-purpose models and summarization-tuned variants (BART-Large, BART-Large-CNN, BART-Large-XSum, PEGASUS-Large, PEGASUS-XSum, T5, and FLAN-T5) are compared alongside domain-specific models (BioBARTv2-Large, SciFive, Clinical-T5) under comparable fine-tuning settings. The models are evaluated with the ROUGE, BLEU, BERTScore, and SARI metrics to measure textual similarity and semantic coherence. The results indicate that BART-based models achieve superior performance on the medical text simplification task.
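To make the overlap-based evaluation metrics named in the abstract concrete, the following is a minimal sketch of ROUGE-1 F1 (unigram overlap between a model's simplification and a reference) in pure Python. This is an illustrative toy implementation, not the library implementation the study would actually use; standard packages additionally handle stemming, multiple references, and higher-order n-grams.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall
    between a candidate simplification and a reference text."""
    cand_counts = Counter(candidate.lower().split())
    ref_counts = Counter(reference.lower().split())
    # Clipped overlap: each shared word counts at most min(candidate, reference) times.
    overlap = sum((cand_counts & ref_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)
```

For example, comparing a hypothetical simplified sentence against a reference: `rouge1_f1("the treatment reduced pain", "the treatment reduced the pain")` yields a precision of 1.0 and a recall of 0.8. SARI additionally compares against the *source* sentence to reward words correctly added, kept, and deleted, which makes it better suited to simplification than pure reference overlap.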
Similar Works
BLEU (2001 · 21,196 citations)
Aion Framework: Dimensional Emergence of AI Consciousness, Observer-Induced Collapse, and Cosmological Portal Dynamics (2023 · 14,167 citations)
Enriching Word Vectors with Subword Information (2017 · 9,685 citations)
A unified architecture for natural language processing (2008 · 5,190 citations)
A new readability yardstick. (1948 · 5,131 citations)