This is an overview page with metadata for this scientific article. The full article is available from the publisher.
The Development of CanPrompt Strategy in Large Language Models for Cancer Care
Citations: 3
Authors: 4
Year: 2024
Abstract
Background: The recent revolution in Large Language Models (LLMs) is transforming industries, enhancing communication, and reshaping research methodologies. LLMs have found significant applications across various sectors, notably in finance for stock market prediction and in healthcare, where complex medical data is analyzed for early diagnosis, improved diagnostic procedures, and personalized treatment planning. Despite this immense potential, challenges such as overwhelming Big Data, model hallucinations, and ethical concerns about patient privacy and bias persist. Method: We implemented a novel strategy, CanPrompt, to mitigate accuracy and hallucination concerns and to ensure responsible deployment. The CanPrompt strategy combines prompt engineering with few-shot and in-context learning to significantly enhance model accuracy by eliciting more relevant answers. The models were tested against a specialized cancer-focused dataset drawn from MedQuAD and evaluated using metrics such as ROUGE and BERTScore, which assess the semantic and syntactic accuracy of generated responses against validated "Gold Answers". Through this approach, the study seeks to outline the potential and limitations of LLMs in improving cancer care. Result: After applying CanPrompt to the models Mistral 7x8b, Falcon 40b, and Llama 3-8b, BERTScore results showed Mistral leading with an accuracy of around 84%, Falcon slightly lower, and Llama the lowest, with the respective precision scores reflecting a similar trend. Conclusion: The study demonstrates the promise of LLMs in cancer care through the introduction of CanPrompt.
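The abstract describes CanPrompt as prompt engineering combined with few-shot, in-context learning, with generated answers scored against MedQuAD "Gold Answers" using ROUGE and BERTScore. As a minimal illustrative sketch only (not the paper's implementation: the function names, the example Q&A pair, and the simplified unigram-overlap scorer standing in for full ROUGE are all assumptions), the two pieces might look like this:

```python
# Sketch of few-shot prompt assembly plus a simplified ROUGE-1 F1 check.
# Pure standard library; a real pipeline would use dedicated metric packages.
from collections import Counter

def build_few_shot_prompt(examples, question):
    """Assemble an in-context prompt from (question, gold answer) pairs."""
    parts = ["You are a medical assistant answering cancer-care questions."]
    for q, a in examples:
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")  # the model completes this turn
    return "\n\n".join(parts)

def rouge1_f1(candidate, reference):
    """Unigram-overlap F1 between a generated answer and the gold answer."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # multiset intersection counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical example pair; real few-shot examples would come from MedQuAD.
examples = [
    ("What is melanoma?", "Melanoma is a cancer that begins in melanocytes."),
]
prompt = build_few_shot_prompt(examples, "What are symptoms of melanoma?")
score = rouge1_f1("Melanoma is a skin cancer of melanocytes.",
                  "Melanoma is a cancer that begins in melanocytes.")
```

In a full evaluation, BERTScore would additionally compare candidate and gold answers in embedding space, capturing semantic similarity that surface n-gram overlap misses.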
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations