OpenAlex · Aktualisierung stündlich · Letzte Aktualisierung: 25.03.2026, 10:04

Dies ist eine Übersichtsseite mit Metadaten zu dieser wissenschaftlichen Arbeit. Der vollständige Artikel ist beim Verlag verfügbar.

Cognitive alignment in cardiovascular AI: designing predictive models that think with, not just for, clinicians

2025·0 Zitationen·Frontiers in Cardiovascular MedicineOpen Access
Volltext beim Verlag öffnen

0

Zitationen

2

Autoren

2025

Jahr

Abstract

Introduction Artificial intelligence (AI) is emerging as a major driver of clinical innovation, with cardiovascular disease (CVD) prediction being one of its most active areas of application (Karatzia et al., 2022; Ashraf and Sultana, 2024). In recent years, hospitals, research centers, and health-technology companies have reported machine learning models achieving accuracy levels of 90%, 95%, or even higher for predicting heart attacks, arrhythmias, and other cardiovascular events, with concrete evidence shown in studies on AI-enabled ECG detection of left ventricular dysfunction and machine learning-based outcome prediction in heart failure (Yao et al., 2021; Pavlov et al., 2024). These results highlight the significant technical progress made in the field. Despite these encouraging statistics, adoption of AI tools in day-to-day cardiovascular practice remains limited (Bomfim et al., 2023). This raises a central question: if AI models demonstrate such high accuracy in controlled evaluations, why are they not widely used in clinical settings? This study explores that question by moving beyond accuracy as the sole measure of success. While many cardiovascular AI models are developed with strong technical performance—demonstrating high discrimination, well-calibrated risk estimates, robustness to data shifts, and external validation—their adoption in practice also depends on how well their outputs integrate into clinicians' established workflows and support decision-making under real-world time constraints and uncertainty. Even when demonstrating strong algorithmic performance—such as high discrimination (e.g., AUC), well-calibrated risk estimates, robust external validation, and resilience to moderate data shifts—many current AI models still fail to integrate with the way clinicians gather, interpret, and apply information during patient care (Reddy and Shaikh, 2025). These shortcomings often arise from interface design gaps, limited explainability, lack of EHR integration, and poor alignment with established clinical reasoning workflows. We refer to this gap as a lack of cognitive calibration—the degree to which AI tools reflect, augment, and support a clinician's reasoning process. To address this, we propose a Cognitive Alignment Index (CAI)—introduced here and detailed later—which evaluates models not only on statistical accuracy but also on comprehensibility, actionability, feedback receptivity, context awareness, and calibrated trust. Addressing cognitive alignment requires a shift in focus. Instead of designing models solely to outperform human performance on test datasets, the goal should be to create systems that enhance clinical reasoning in explainable, interpretable, and contextually relevant ways (Di Martino and Delmastro, 2023; Moreno-Sánchez, 2023). For CVD applications, this means moving from automation toward augmentation—from opaque, black-box predictions to collaborative, co-reasoned decision-making (Youssef et al., 2024). The Missing Link: Cognitive Alignment Current debates regarding trustworthy AI tend to focus on concepts like explainability, fairness, and robustness. These are important, but they mainly address models' extrinsic properties. Cognitive alignment, instead, investigates an internally compatible human-machine style of reasoning integration (Szabo et al., 2022). It asks: Do AI systems process and report information in such modes that clinicians will be in a position to intuitively comprehend, critique and act on? We can define cognitive alignment as the degree to which an AI system's reasoning processes, information presentation, and interaction patterns correspond to and enhance the cognitive processes clinicians use in real-world decision-making. This concept spans five core constructs—narrative coherence, counterfactual reasoning, progressive disclosure, uncertainty communication, and interactive collaboration—each contributing to a shared human–AI decision framework. While it overlaps with concepts such as explainability, usability, trust calibration, and shared mental models, cognitive alignment is distinct in its focus on mutual intelligibility and collaborative reasoning between clinician and model. Consider a 68-year-old with chest pain, discordant biomarkers and imaging, chronic kidney disease, diabetes, prior stroke, and limited access to follow-up care. The clinician must reconcile conflicting evidence, weigh competing risks, and factor in social constraints. A cognitively aligned AI could mirror this reasoning—integrating multimodal data, generating counterfactuals, and conveying calibrated uncertainty to guide a patient-specific decision. Contrast this with a typical CVD risk prediction model. It may take in a static data set, calculate probabilistic risk, and provide an output—e.g., 0.87 probability of cardiovascular event at 5 years. The model reveals little regarding why this score is high, what modifiable variables impacted it, or in what way it changes in response to new interventions. Many static, black-box risk scoring models— especially those trained on structured tabular datasets—present outputs solely as numeric probabilities without contextual explanation, making them less intuitive for clinical reasoning (DeGrave et al., 2023). In contrast, well-designed systems, including those using case-based reasoning, counterfactual analysis, or natural-language generation, can produce narrative rationales and patient-specific explanations that align more closely with how clinicians synthesize information (Pradeep et al., 2024). This mismatch is not abstract. It begets a disconnect in trust, usability, and accountability (Alharbi, 2024). Where an output from an AI raises doubt in a clinician's intuition, and no middle ground lies, the human decision-maker falls back to skepticism or dismissal. Worse, if clinicians over-rely on an opaque model, the result is over-reliance in faulty predictions—perilous in life-critical decisions (Mooghali et al., 2023). Recent empirical work underscores the importance of explainability and cognitive alignment. A systematic review of XAI in clinical decision support systems found that only a minority of applications formally evaluated explanation quality, highlighting trust as a critical, yet often underexamined, dimension (Salih et al., 2023; Shah et al., 2025). Another study demonstrated that cardiovascular event forecasting systems augmented with XAI increased user comprehension and decision-making confidence, achieving both high accuracy and improved usability (Bilal et al., 2025). These findings show that enhancements to interpretability directly improve trust and adoption— validating our argument that accuracy alone is not enough without cognitive alignment. Where Current Cardiovascular AI Falls Short The limitations discussed in this section refer primarily to classes of cardiovascular AI models that are (i) trained on static, cross-sectional datasets, (ii) optimized for predictive accuracy rather than interpretability, and (iii) deployed without advanced temporal modeling, multimodal integration, or narrative explanation capabilities. These constraints do not apply to all cardiovascular AI architectures—many state-of-the-art models in research settings already address some of these issues— but they remain common in tools currently used in routine clinical practice. In practice, widely deployed cardiovascular AI tools include FDA-cleared ECG algorithms for LV dysfunction (Attia et al., 2019), AI-guided echo acquisition (Narang et al., 2021), CT-FFR integrated into NHS pathways (Fairbairn et al., 2025), and EHR-based HF readmission models. In contrast, research prototypes— such as multimodal transformers combining ECG, echo, and EHR data (Mittal et al., 2023; Poterucha et al., 2025) or counterfactual imaging interpreters—remain largely academic. Recent work has demonstrated their potential, including improvements in arrhythmia detection (Zeljkovic et al., 2025), development of patient digital-twin frameworks (Anisuzzaman et al., 2025), and transformer-based atrial fibrillation risk prediction (Lisicic et al., 2024). Cognitive alignment gaps are most pronounced in deployed models, not these cutting-edge prototypes. The failure of many currently deployed cardiovascular AI models—particularly static tabular classifiers trained on cross-sectional datasets— to achieve cognitive alignment manifests in several critical areas:  Temporal Reasoning Deficiency: Clinicians reason over time, comparing past trajectories to future projections. Many cardiovascular AI tools currently deployed in clinical settings— particularly static tabular classifiers trained on cross-sectional snapshots of EHR data—lack temporal reasoning capabilities. These models reduce patient history to single time points, overlooking evolving physiological trends that clinicians use for decision-making. This limitation does not apply to temporal sequence models such as RNNs, LSTMs, Transformers, temporal convolutional networks (TCNs), survival analysis models like Cox/DeepSurv, or dynamical Bayesian/state-space approaches, which are explicitly designed to learn from longitudinal data (Kagiyama et al., 2019).  Opaque Abstractions: Clinicians prefer causal or mechanistic reasoning - "this patient's sedentary lifestyle, combined with family history, likely explains the elevated risk." In contrast, black-box models offer abstractions untethered from causal understanding (Vishwarupe et al., 2022). A high risk score may be mathematically correct, but without interpretive scaffolding, it remains clinically inert.  Disjointed Input and Output Modalities: Doctors process multimodal data—lab results, imaging, voice tone, visual signs. Most AI models require clean, structured inputs and output a single prediction (Van Der Vegt et al., 2023). This limits their ability to integrate into the messy, multimodal ecosystem of real clinical practice.  Static Decision Boundaries: In some deployments, binary classifiers are paired with fixed probability thresholds (e.g., intervene if risk > X%), which is often a policy choice rather than an inherent model property. While such cut points can simplify implementation, they may overlook trade-offs, comorbidities, patient preferences, and evolving clinical information. More flexible approaches—such as continuous risk estimates, decision-curve analysis, and context-aware policies that adapt thresholds to individual patient contexts—better reflect the nuanced nature of cardiovascular decision-making. It is important to acknowledge that recent advances in AI research have begun to address some of these limitations. Emerging models now incorporate temporal reasoning, causal inference, counterfactual simulation, and more sophisticated multimodal integration, enabling them to analyze evolving patient trajectories, draw connections between complex variables, and provide richer explanations (Yang et al., 2021; Engelhardt et al., 2024; Lin et al., 2025). These capabilities represent a significant step forward and demonstrate the technical feasibility of overcoming many past shortcomings. However, their presence in cutting-edge research does not yet equate to widespread adoption in clinical cardiovascular tools. Many AI systems currently deployed in hospitals or available in commercial products still operate with static inputs, limited interpretability, and rigid decision boundaries (Schepart et al., 2023; Fortuni et al., 2024). Thus, while technical progress is undeniable, the remains in these capabilities into used systems that align with clinicians' cognitive workflows and decision-making Cardiovascular AI Cognitive alignment that we how CVD prediction models are and are that define such  AI outputs should a Instead of the model patient's risk is elevated primarily to high and a recent in of these could the risk This with how clinicians risk to and must be to in et al., 2024). For models can be integrated with structured such as using or other to narrative rationales that predictions with clinically  of clinical reasoning is if the patient if they A cognitively aligned AI should support counterfactual clinicians to emerging models incorporate causal and counterfactual These should be integrated into CVD AI to not  than with all data at or it models should offer could be by to into data or This the way clinicians levels of information on the and  as a not a decision-making in is with uncertainty. AI models—particularly those without high probabilities even in of or when In contrast, well-calibrated models, such as or with can align probabilities with A cognitively aligned model should and and offer calibrated risk or et al., 2024). This clinicians to factor in model into their clinical  The future of CVD AI in interactive static but or that clinicians and model outputs in real These tools should learn not only from data, but from with human  dimension of cognitive alignment can be that for systematic in both and real-world clinical can be comprehension accuracy in reasoning and of reasoning may be by the accuracy and clinical of and the with which they to can be evaluated by in decision without of accuracy and the of model on may be the of decision on model and the in over-reliance on interactive can be by the of the of outputs clinician and the of when human–AI for the Cognitive Alignment Index The Cognitive Alignment Index how well an AI with clinicians' reasoning five comprehensibility, actionability, feedback receptivity, context awareness, and trust dimension is to enabling and Cardiovascular AI often as imaging, and are to temporal and These multimodal integration can be alignment including progressive of findings and to data can from data (e.g., model (e.g., interface (e.g., or (e.g., include decision and clinician with For and that explanations uncertainty estimates, and clinician or with the in the system's on its calibrated trust, remains to the system's demonstrated and clinical and The cognitive alignment design well beyond in and in AI is aligned to human is not in a an et al., AI a in the clinician's work not only but explainable, and to clinician at the cognitive can enhance Clinicians how and why a model a prediction will be more likely to use it This is in accountability for decisions made in an AI remains 2023; et al., 2025). cognitively compatible AI can be to et al., tend to be by opaque, models that Where AI systems can to in and learn to to user they and more to trust and interpretability, AI in cardiovascular care must complex and particularly patient and data with frameworks such as and in the or in requires that AI tools not only information but also data and et al., 2025). A in the for multimodal with the of data and 2024). for overcoming these include the adoption of such as and which model without patient data et al., 2023). these into the design of AI tools can while clinical both and Cardiovascular AI as a Cognitive AI is to its in cardiovascular it will have to shift from to cognitive that not of will require other than data or it will require a of in clinical reasoning et al., 2024). Alignment of is not an a and especially failure is like in is interpretive predictive made in by and Current systems, will not if they this cognitive ecosystem et al., 2023). This several typical a it the typical that accuracy is the and measure of AI While accuracy may be a not at all and 2024). A model that well on data but is or with day-to-day practice in depends on or not clinicians can interpret, and to the output of a than or not the model is predictive et al., it the automation narrative that still of AI Many clinical should Cardiovascular care and that no model can Instead of to human AI should to It should enhance confidence, and support counterfactual reasoning et al., 2024). Recent of multimodal cardiovascular AI systems have shown that such capabilities are imaging, physiological and clinical data into frameworks that improve both accuracy and clinician trust et al., In it should clinical not it for an design Cognitive alignment be in by data or AI alone et al., 2024). from in cardiovascular imaging this that the most between AI and and 2024). It the collaborative of cognitive and The goal is not to AI more in the but to it more clinically in the Cognitive alignment requires that even advanced models are and clinical remains et al., This must be into both design and uncertainty contextual and for clinician we must also that cognitive alignment is not a static It is an that with new data, new and new clinical et al., 2025). AI systems are deployed hospitals to ability to adapt to cognitive will their and The development of structured frameworks for such has with recent AI research cognitive performance and et al., To and must alignment into of the AI This cognitive into model that to cognitive and decision feedback clinicians can critique and model that measure usability, trust, and not performance These are not are central to the of AI in cardiovascular care. clinicians and a common cognitive AI more than a a in the often of human that may be the that AI to on its the limitations in current cardiovascular the cognitive alignment to address and the of these by the from current cardiovascular AI limitations to cognitively aligned highlighting the between gaps and design of current limitations in cardiovascular cognitive alignment and clinical Current on Cognitive Alignment Temporal Reasoning trained on data patient history and evolving trends temporal reasoning and sequence models survival models temporal convolutional networks robustness HF with clinicians' longitudinal Opaque outputs lack causal and in Static tabular with case-based counterfactual on and trust Disjointed ability to process real-world clinical data integration of and inputs models imaging model, multimodal risk prediction the data used in practice Static Decision may trade-offs, and context context-aware risk models, calibrated classifiers prediction risk models on HF and care of AI tools operate as output systems AI that from user feedback learning with feedback learning feedback and shared decision-making in advances not widely practice gap integration, adoption on real-world and adoption in practice design on clinical alignment score current cardiovascular AI to cognitively aligned the from limitations to The in the in the Despite advances in machine the of clinical are often from the AI development Most CVD models are trained in structured is clean, are and are et al., 2022). hospitals are data is often or and clinicians must decisions with This model and clinical a driver of cognitive et al., 2025). For a model may a patient as for heart failure on elevated and imaging However, in practice, the also of or social like of which are from the but and et al., 2024). A cognitively aligned model its contextual limitations and for human that is a narrative as as it is a numeric Clinicians must not only but also and care and 2025). AI models that this context are not clinically and the of Cardiovascular AI from human AI systems should be evaluated not only for what they but for how with them under and cognitive in and critical care show that even tools fail when they with human mental models. In the context of this means designing and outputs that align with clinical A risk prediction model in an EHR should do more than should provide progressive disclosure, contributing (e.g., past and for (e.g., if by explanations must be to user A may access to and trajectories, while a may prefer causal interpretability is from Alignment is evidence that even AI can when cognitive alignment is In a deployed a that in but in Clinicians developed to in In other such as or heart disease risk black-box tools have (e.g., or they in data, not they and 2021; 2021; 2025). These are not reflect a cognitive design The models not to adapt to clinician or in trust and clinicians of AI 2022). This is not a of is a for AI rather than it, clinicians AI without clinicians The of cardiovascular AI on this AI has a but often to technical on 2023). cognitive alignment requires not explanation, but AI models in a does this to a patient This interactive the model from a machine into a clinical systems must also learn from a clinician a model in the should learn to or This is the of models improve shared not a in a A a CVD risk model that the this patient the and for and A AI this and use it to future outputs in This is not of for and cognitively aligned AI is only the Clinicians must also be trained to with AI and This means AI into the of models, but the of machine decision-making. Clinicians must learn to does this model the data used of when with the do this output to the These are not are core to clinical practice in the AI In cardiovascular decisions often these are must a of AI feedback and should the critique of model outputs and create shared accountability AI is not used it can human capabilities. Cognitive Alignment as a of most models are by technical what and user We propose that AI in cardiovascular be evaluated on a Cognitive Alignment Index Do clinicians why the model the output to a real-world clinical the model incorporate user the model its use of the model or clinical statistical a more framework. and must their to alignment, not the Cognitive Alignment Index To systematic we propose a scoring points with higher cognitive  accuracy of clinician to structured with an understanding of why the model  of in to the  degree of model or structured clinician feedback during or  accuracy and of or uncertainty in  in clinician trust using a  with of human–AI reasoning alignment in  higher are with improved clinical (e.g., and human–AI performance (e.g., decision  over and when score the AI This is not a but a robust that can guide both research and real-world cardiovascular AI To from to an study we propose an both cognitive alignment and performance In this clinicians with patient cardiovascular or from longitudinal EHR datasets, making decisions with and without AI will include clinical decision-curve analysis, calibrated score for probabilistic as an and will cognitive using usability or and trust using For temporal reasoning, we will on longitudinal EHR data, robustness with controlled and shift and time This a for cardiovascular AI on both technical and cognitive alignment. to The for cognitively aligned AI is not a technical is a policy in AI must not only performance but adoption and systems must clinicians using this as not but frameworks must to acknowledge shared between and is when an aligned AI but and frameworks must reflect the nature of care. AI development not only in data, but in design aligned to one or may Cognitive alignment must to and to in While cognitively aligned cardiovascular AI can rather than such as automation clinician trade-offs, and shift must be and causal offer but and uncertainty calibration, only if clinicians are on them in real with these is for and Cognitive alignment should also be and adoption Alignment can as a pathways by and enabling EHR of such as and can patient data, but trade-offs, including accuracy increased and from the importance of integration into clinical and that both technical performance and clinician trust. uncertainty calibrated prediction or that AI outputs are with to support nuanced clinical The future of cardiovascular AI not in more data or but in models and and prediction and Cognitive alignment is not a it is a for AI that is and AI systems reason with and the of tools and in care. this requires designing models that reflect human reasoning, that clinicians to question rather than and systems on their for not In we shift AI from a to an from a black-box to an In the nuanced and of cardiovascular this shift could the between and prediction and or even between a risk score and a than only how our models we must how well they with in depends not on what we but on how we

Ähnliche Arbeiten

Autoren

Institutionen

Themen

Machine Learning in HealthcareExplainable Artificial Intelligence (XAI)Artificial Intelligence in Healthcare and Education
Volltext beim Verlag öffnen