This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Probabilistic Prompts for Zero-Shot and Few-Shot Large Language Models: An Empirical Study of Patient-Reported Outcomes
Citations: 0 · Authors: 5 · Year: 2025
Abstract
Monitoring the severity of health symptoms during and after radiotherapy (RT) is critical to cancer patient care in clinical practice. However, existing machine-learning and deep-learning-based predictive approaches either suffer from high retraining costs or lack clinically acceptable explanations. To address these two major drawbacks, we investigate probabilistic prompts for zero-shot and few-shot Large Language Model (LLM) based predictive models to identify and track cancer patients with both mild and severe symptoms under anomaly detection settings. Leveraging the designed probabilistic prompts, our approach enables accurate prediction despite limited patient-centered training data. Experimental results on prostate cancer patients with bowel pain demonstrate the capability of the anomaly-detection LLM (AD-LLM) to effectively classify symptom severity using these structured prompts. This method offers a feasible alternative for real-time symptom monitoring in RT, potentially improving timely intervention and patient outcomes in RT clinical practice.
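The abstract describes probabilistic prompts that let a zero-shot or few-shot LLM flag severe symptoms as anomalies. A minimal sketch of how such a prompt might be assembled and its output thresholded is shown below; the prompt wording, the example format, and the `classify` threshold are illustrative assumptions, not details from the paper, and the actual LLM call is omitted.

```python
# Sketch of a "probabilistic prompt" for symptom-severity anomaly detection.
# All wording and field names are hypothetical; the paper's exact prompt
# design is not reproduced here.

def build_probabilistic_prompt(symptom: str, score: int, examples=None) -> str:
    """Compose a zero-shot prompt (few-shot if labeled examples are given)
    asking the model to return a probability that the symptom is severe."""
    shots = ""
    if examples:  # few-shot: prepend labeled patient-reported examples
        shots = "\n".join(
            f"Symptom: {s}, reported score: {sc} -> P(severe) = {p:.2f}"
            for s, sc, p in examples
        ) + "\n"
    return (
        "You monitor patient-reported outcomes during radiotherapy.\n"
        f"{shots}"
        f"Symptom: {symptom}, reported score: {score}\n"
        "Respond with only P(severe) as a number in [0, 1]."
    )

def classify(prob: float, threshold: float = 0.5) -> str:
    """Flag the patient as anomalous (severe) when the model's returned
    probability crosses the threshold; otherwise treat the symptom as mild."""
    return "severe" if prob >= threshold else "mild"
```

In use, the prompt string would be sent to an LLM and the returned probability passed to `classify`; the few-shot variant simply prepends a handful of labeled patient reports to the same template.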
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations