This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
An Organized Approach to Using Large Language Models for Medical Information
Citations: 0
Authors: 7
Year: 2025
Abstract
INTRODUCTION: ChatGPT and other large language models (LLMs) have increased in popularity. Despite the rapid rise in the implementation of such technologies, frameworks for applying appropriate prompting techniques in medical settings are limited. In this paper we establish the nomenclature of "variable" and "clause" in the prompting of an LLM, and provide example interviews that outline the utility of such an approach in medical applications.

METHODS: In this study assessing the LLM ChatGPT-4, we define terms used in prompting procedures, including "input prompt," "variable," "demographic variable and clause," "independent variable and clause," "dependent variable and clause," "generative clause," and "output." This methodology was implemented with three sample patient cases from both a patient and a physician perspective.

RESULTS: As demonstrated in our three cases, precise combinations of variables and clauses that consider the patient's age, gender, weight, height, and education level can yield unique outputs. The software can do so quickly and in a personalized, patient-specific manner. Our findings demonstrate that LLMs can be used to generate comprehensive sets of educational material to address current limitations, with the potential to improve healthcare outcomes as the use of LLMs is further explored.

CONCLUSION: The framework we describe represents a unique attempt to standardize a methodology for medical inputs into a large language model. Doing so expands the potential for outlining patient-specific information that can be included in a query by either a patient or a physician. Most notably, future projects should consider the specialty- and presentation-specific input changes that may yield the best outputs for the desired goals.
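The abstract's "variable and clause" nomenclature can be pictured as assembling an input prompt from a demographic clause, an independent clause, a dependent clause, and a generative clause. The sketch below is a hypothetical illustration only; the full article is not reproduced here, so the exact clause wording, ordering, and the `build_prompt` function are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of composing an "input prompt" from the clause
# types named in the abstract. The clause phrasing and ordering are
# assumptions for illustration, not taken from the paper itself.

def build_prompt(demographics: dict, independent: str,
                 dependent: str, generative: str) -> str:
    """Compose an input prompt from demographic, independent,
    dependent, and generative clauses."""
    # Demographic clause: patient-specific context such as age,
    # gender, weight, height, and education level.
    demo_clause = ", ".join(f"{k} {v}" for k, v in demographics.items())
    return (
        f"For a patient ({demo_clause}), "    # demographic clause
        f"who presents with {independent}, "  # independent clause
        f"regarding {dependent}, "            # dependent clause
        f"{generative}"                       # generative clause
    )

# Example query a patient or physician might assemble:
prompt = build_prompt(
    {"age": 58, "gender": "male", "education": "high-school"},
    "newly diagnosed type 2 diabetes",
    "dietary management",
    "generate patient-education material at an appropriate reading level.",
)
print(prompt)
```

Separating the clauses this way makes it straightforward to vary one component (for example, the education level in the demographic clause) while holding the others fixed, which mirrors the patient-specific tailoring the abstract describes.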
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,611 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,504 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,025 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations