This is an overview page with metadata for this scientific work. The full article is available from the publisher.
MEDDOLLINA: A New Regime of Clinical Intelligence Beyond Generative Medical AI
Citations: 0
Authors: 4
Year: 2026
Abstract
Recent advances in generative medical AI have delivered impressive fluency and broad medical knowledge. However, clinical reasoning is not a text-generation problem. Despite increasing scale and benchmark performance, generative systems continue to exhibit behaviour incompatible with real clinical use, particularly under uncertainty and longitudinal decision-making. These failures manifest as premature resolution, overconfident responses, and loss of clinical intent, posing significant risk in safety-critical settings. We introduce Meddollina, a clinical intelligence system designed for medical reasoning under ambiguity while preserving human authority and responsibility. Meddollina operates under Clinical Contextual Intelligence (CCI), characterised by persistent context awareness, intent preservation, bounded inference, and principled deferral when information is insufficient. Rather than optimising for fluent answers, the system is constrained to align with real clinical reasoning practice. We evaluate Meddollina using behaviour-first criteria across a large-scale evaluation of more than 16,000 clinical cases, emphasising consistency, restraint, and appropriate handling of uncertainty over standalone accuracy. Under this clinically grounded, behaviour-first evaluation regime, Meddollina demonstrates a distinct form of clinically deployable medical intelligence, exhibiting consistently reliable, conservative, and contextually aligned behaviour compared with generation-centric medical AI systems. Notably, Meddollina exhibits stable reasoning across turns, conservative responses in underspecified scenarios, and a reduced tendency toward speculative output. These findings demonstrate that improvements in clinical intelligence cannot be achieved through generative scaling alone. Clinically deployable systems require objectives, constraints, and evaluation methods grounded in medical responsibility from the outset.
Fluency alone is no longer a sufficient proxy for safe and effective medical reasoning. Meddollina is currently being made available through a controlled access and evaluation program to support responsible clinical assessment and research collaboration. Clinicians and researchers interested in evaluating Clinical Contextual Intelligence systems may register interest at: Join The Meddollina Waitlist
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,287 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,140 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,534 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,450 citations