This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Leveraging artificial intelligence for decision-making in pediatric progressive and refractory solid tumors
Citations: 0
Authors: 4
Year: 2026
Abstract
Pediatric patients with progressive and refractory solid tumors face a challenging prognosis. Despite advances in treatments such as immunotherapy and targeted therapy, survival rates remain low for certain tumor types. Decision-making in these complex cases often requires a multidisciplinary approach, integrating risk-based management, precision medicine, and access to clinical trials. Artificial intelligence (AI) technologies, particularly large language models (LLMs), hold promise for improving clinical reasoning and decision support in pediatric oncology. This study evaluated the decision-making capabilities of five AI tools (ChatGPT, Gemini, Claude, Perplexity, and OpenEvidence) in six hypothetical cases of refractory or progressive pediatric solid tumors. Each AI tool was presented with two sequential queries: a request to generate potential treatment options, followed by a request to identify and justify the most appropriate option from its initial list. The AI tools generated a total of 124 treatment recommendations, averaging 24.8 per tool. Clinical trial enrollment was the most frequently selected "best option," accounting for 55.2% of cases. Other notable recommendations included targeted therapy (17.2%), surgery (10.3%), chemotherapy (10.3%), best supportive care (10.3%), and immunotherapy (3.4%). Notably, the AI tools exhibited distinct tendencies in their decision-making approaches, with some favoring aggressive interventions and others emphasizing supportive or palliative care. AI tools demonstrate potential for assisting with complex treatment decisions in pediatric oncology, particularly by identifying clinical trial options. However, the observed variability in recommendations underscores the need for careful human oversight to ensure that AI-generated suggestions align with clinical evidence, patient and family preferences, and the overall goals of care.
Future research should explore how AI tools can be further refined to incorporate nuanced patient-specific information and address the emotional and psychological impact of AI-assisted decision-making.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,287 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,140 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,534 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,450 citations