This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
AILABUD at the NTCIR-17 MedNLP-SC Task: Monolingual vs Multilingual Fine-tuning for ADE Classification
0
Citations
5
Authors
—
Year
Abstract
The AILAB team participated in the Social Media subtask of the NTCIR-17 MedNLP-SC Task. This paper reports our approach to solving the problem and discusses the official results. The presented model performs binary classification of tweets: given a UMLS term, it determines whether that term is present as an ADE in the tweet. Due to this design, it does not need an intermediate ADE extraction step, and it can be extended to new UMLS terms not currently present in the text. The base model used in the experiments is multilingual SapBERT, which was fine-tuned in both monolingual and multilingual settings. The best results were achieved by training the model on multilingual data.
Similar Works
"Why Should I Trust You?"
2016 · 14,384 cit.
A Comprehensive Survey on Graph Neural Networks
2020 · 8,719 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 cit.
Artificial intelligence in healthcare: past, present and future
2017 · 4,434 cit.