This is an overview page with metadata for this scientific article. The full article is available from the publisher.
CancerLLM: a large language model in cancer domain
Citations: 1
Authors: 10
Year: 2026
Abstract
Medical large language models (LLMs) perform well on medical NLP tasks, but no models are tailored for cancer phenotyping and diagnosis. Moreover, models with tens of billions of parameters impose a heavy computational burden in healthcare settings. To this end, we present CancerLLM, a 7-billion-parameter Mistral-style model trained on 2.7 M clinical notes and 515 K pathology reports across 17 cancer types, followed by fine-tuning on cancer phenotype extraction and diagnosis generation tasks. Our evaluation showed that CancerLLM achieved strong performance on internal benchmarks, with an F1 score of 91.78% on phenotype extraction and 86.81% on diagnosis generation. It outperformed existing LLMs, with an average F1 score improvement of 9.23%. Additionally, CancerLLM demonstrated efficiency in time and GPU usage, as well as robustness, compared with other LLMs. We conclude that CancerLLM can potentially provide an effective and robust solution to advance clinical research and practice in the cancer domain.
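The F1 scores reported above are the harmonic mean of precision and recall. A minimal sketch of that computation (the precision/recall values here are illustrative, not taken from the paper):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative values: precision 0.90, recall 0.9364 give F1 ~ 0.9178,
# i.e. roughly the 91.78% phenotype-extraction score cited above.
print(round(f1_score(0.90, 0.9364), 4))
```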
Similar Works
"Why Should I Trust You?"
2016 · 14,150 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,543 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,047 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,410 citations
Analysis of Survival Data
1985 · 4,379 citations