This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Large language models for improving cancer diagnosis and management in primary health care settings
6 Citations · 2 Authors · Year: 2024
Abstract
Cancer remains a leading cause of death globally, but diagnosing and treating it is often challenging. Barriers such as multiple consultations, overburdened healthcare systems, and limited cancer-specific training among primary health care clinicians significantly delay diagnoses and worsen outcomes. To address these challenges, health care must enhance patient and clinician knowledge while minimising diagnostic and treatment delays. Emerging technologies, particularly artificial intelligence (AI), hold great promise for revolutionising cancer care by improving diagnosis, education, and patient management. Large language models (LLMs) such as ChatGPT offer exciting potential to enhance cancer care in three key areas: clinical decision-making, patient education and engagement, and access to oncology research. Studies suggest that ChatGPT-4's oncology-related performance approaches that of medical professionals, enabling it to assist in decision-making, improve outcomes, and streamline cancer care. These tools can help clinicians rule out potential cancer diagnoses based on symptoms and history, reducing unnecessary tests and consultations. Additionally, specialised LLMs can provide accessible, understandable information for patients while disseminating cutting-edge research to clinicians. Despite their potential, LLMs face notable limitations. Output quality varies with the type of cancer or treatment, the specificity of questions, and their phrasing. Many LLMs produce responses requiring advanced literacy, limiting accessibility. Moreover, AI bias remains a concern; training on biased data could perpetuate healthcare inequalities and lead to harmful recommendations. Accountability is another critical issue: the capacity of LLMs to produce errors in their outputs raises questions about responsibility, highlighting the need for safeguards and clear frameworks to ensure equitable and reliable AI integration into cancer care.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,380 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,243 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,671 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,496 citations