OpenAlex · Updated hourly · Last updated: 17.04.2026, 08:21

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Artificial intelligence for clinical reasoning: the reliability challenge and path to evidence-based practice

2025 · 6 citations · QJM · Open Access
Open full text at publisher

6 citations · 5 authors · 2025

Abstract

The integration of generative artificial intelligence (AI), particularly large language models (LLMs), into clinical reasoning heralds transformative potential for medical practice. However, their capacity to authentically replicate the complexity of human clinical decision-making remains uncertain, a challenge defined here as the reliability challenge. While studies demonstrate LLMs' ability to pass medical licensing exams and achieve diagnostic accuracy comparable to physicians, critical limitations persist. Crucially, LLMs mimic reasoning patterns rather than executing genuine logical reasoning, and their reliance on outdated or non-regional data undermines clinical relevance. To bridge this gap, we advocate for a synergistic paradigm in which physicians leverage advanced clinical expertise while AI evolves toward transparency and interpretability. This requires AI systems to integrate real-time, context-specific evidence, align with local healthcare constraints, and adopt explainable architectures (e.g. multi-step reasoning frameworks or clinical knowledge graphs) to demystify decision pathways. Ultimately, reliable AI for clinical reasoning hinges on harmonizing technological innovation with human oversight, ensuring ethical adherence to beneficence and non-maleficence while advancing evidence-based, patient-centered care.

Topics

Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare · Clinical Reasoning and Diagnostic Skills