This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Implementing large language model-based artificial intelligence (AI) technology in proposing effective treatment plans in patients with cancer.
3
Citations
4
Authors
2024
Year
Abstract
e13660 Background: Large Language Models (LLMs) are programs that can respond to queries without previous exposure to the input. This is possible because the programs are generative statistical models trained on hundreds of terabytes of textual data; their output is a statistical estimate of the response that the corpus of human-generated text suggests is most likely correct. ChatGPT is an example of an LLM that can interpret questions and synthesize probability-based responses. However, ChatGPT's potential to accurately assist physicians with a diagnosis remains to be determined. The goal of this study is to elucidate AI's potential in proposing feasible and accurate treatment plans specifically for patients with cancer. Methods: Fifty patients at a medical oncology practice were recruited for this study. For each patient, the physician's shorthand notes were recorded and organized. ChatGPT was then asked to propose a potential treatment plan from these notes, following a predetermined workflow for each individual. The responses were reviewed by a medical oncologist for accuracy and feasibility via a Qualtrics survey. Results: ChatGPT proposed a treatment plan in agreement with the physician for 18 of the 50 cancer patients, an agreement rate of 36%. For the remaining 32 patients, the physician disagreed with ChatGPT's proposed treatment plan on the basis of lack of personalization, inhibitive cost, incorrect recommendation, or greater alternative options. Thirteen patients fell into the 'Lack of Personalization in Treatment' category, twelve into the 'Incorrect Recommendation' category, four into the 'Inhibitive Cost' category, and three into the 'Greater Alternative Options' category. Most disagreements, however, fell into the personalization category, suggesting ChatGPT's unfamiliarity with generating personalized treatment plans.
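The reported figures can be cross-checked with a short script (a sketch only; the counts and category names are taken directly from the abstract):

```python
# Cross-checking the agreement statistics reported in the Results section.
# All counts come from the abstract; no new data is introduced.
total_patients = 50
agreed = 18

disagreement_categories = {
    "Lack of Personalization in Treatment": 13,
    "Incorrect Recommendation": 12,
    "Inhibitive Cost": 4,
    "Greater Alternative Options": 3,
}

agreement_rate = agreed / total_patients * 100
disagreed = sum(disagreement_categories.values())

# The category counts should account for every non-agreeing patient.
assert agreed + disagreed == total_patients

print(f"Agreement rate: {agreement_rate:.0f}%")  # prints "Agreement rate: 36%"
print(f"Disagreements: {disagreed}")             # prints "Disagreements: 32"
```

The category counts (13 + 12 + 4 + 3 = 32) sum to the 32 disagreements, confirming internal consistency with the 36% agreement rate.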
Conclusions: In the current study, ChatGPT was expected to create a treatment plan from only the clinician's abridged shorthand notes, without any familiarity with the physician's personal treatment preferences. This knowledge gap can account for an agreement rate below 50%. Despite its flaws, ChatGPT was able to generate an adjusted and corrected version of the treatment plan when prompted further outside the set workflow. This highlights the invaluable role of clinical experience in synthesizing an accurate treatment plan. As it enters the medical field, ChatGPT has shown potential in proposing feasible and accurate treatment plans, especially in the oncology clinical setting. Despite the low agreement rate obtained in this study, ChatGPT has shown promise in developing an agreeable treatment plan under the oversight of the attending physician.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations