This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Beyond Accuracy: Multidimensional Evaluation of Large Language Models in Hepatocellular Carcinoma Management Emphasizing Prompting
Citations: 0
Authors: 8
Year: 2025
Abstract
Background & Aims
Hepatocellular carcinoma is the most common type of primary liver cancer and remains a major global health challenge. In resource-limited settings, patients often face barriers such as low screening rates, poor adherence, and limited access to medical information. Despite comprehensive clinical guidelines, issues such as inadequate patient education and ineffective communication persist. While large language models show promise in clinical communication and decision support, their performance in hepatocellular carcinoma management has not been systematically evaluated across multiple dimensions.

Methods
Ten emerging language models, including general-purpose and medical-domain models, were assessed under prompted and unprompted conditions using a standardized question set covering five key stages: general knowledge, screening, diagnosis, treatment, and follow-up. Accuracy was rated by experts, while semantic consistency, local interpretability, information entropy, and readability were measured computationally.

Results
ChatGPT-4o and Grok-3 achieved the highest accuracy (2.62 ± 0.06, 93%; 2.60 ± 0.06, 95%) and interpretability (0.43; 0.43). Prompting significantly improved accuracy (p < 0.001) and interpretability (p < 0.001) across all models. Semantic consistency declined slightly in most models; information entropy generally increased; readability changes varied.

Conclusions
This study presents the first multidimensional evaluation of large language models in hepatocellular carcinoma–related clinical tasks. General-purpose models outperformed some medical models, revealing limitations in domain-specific fine-tuning. Prompt design strongly influenced model performance. Further research should integrate diverse prompt strategies and clinical scenarios to improve the usability of language models in real-world oncology settings.
Lay summary
This study evaluated how well advanced language-based artificial intelligence models can answer clinical questions related to hepatocellular carcinoma. The results showed that some models, especially when guided with structured instructions, provided accurate and understandable responses. These findings suggest that such tools may help improve communication and access to information for both doctors and patients managing liver cancer.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,391 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,257 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,685 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,501 citations