OpenAlex · Updated hourly · Last updated: 03.04.2026, 06:55

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Large language models for improving cancer diagnosis and management in primary health care settings

2024 · 6 citations · Journal of Medicine Surgery and Public Health · Open Access
Open full text at publisher

Citations: 6 · Authors: 2 · Year: 2024

Abstract

Cancer remains a leading cause of death globally, but diagnosing and treating it is often challenging. Barriers such as multiple consultations, overburdened healthcare systems, and limited cancer-specific training among primary health care clinicians significantly delay diagnoses and worsen outcomes. To address these challenges, health care must enhance patient and clinician knowledge while minimizing diagnostic and treatment delays. Emerging technologies, particularly artificial intelligence (AI), hold great promise in revolutionising cancer care by improving diagnosis, education, and patient management. Large language models (LLMs) such as ChatGPT offer exciting potential to enhance cancer care in three key areas: clinical decision-making, patient education and engagement, and access to oncology research. Studies suggest that ChatGPT-4's oncology-related performance approaches that of medical professionals, enabling it to assist in decision-making, improve outcomes, and streamline cancer care. These tools can help clinicians rule out potential cancer diagnoses based on symptoms and history, reducing unnecessary tests and consultations. Additionally, specialised LLMs can provide accessible, understandable information for patients while disseminating cutting-edge research to clinicians. Despite their potential, LLMs face notable limitations. Output quality varies based on the type of cancer or treatment, the specificity of questions, and phrasing. Many LLMs produce responses requiring advanced literacy, limiting accessibility. Moreover, AI bias remains a concern; training on biased data could perpetuate healthcare inequalities, leading to harmful recommendations. Accountability is another critical issue: the capacity of LLMs to produce errors in their outputs raises questions about responsibility, highlighting the need for safeguards and clear frameworks to ensure equitable and reliable AI integration into cancer care.

Topics

Artificial Intelligence in Healthcare and Education · AI in cancer detection · Artificial Intelligence in Healthcare