OpenAlex · Updated hourly · Last updated: 13.03.2026, 09:12

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Clinically Actionable Explainable AI in Pulmonary Arterial Hypertension: Endpoints, Calibration, and External Validation. Reply to Pagnoni et al. Toward Clinically Actionable Explainable AI in Pulmonary Arterial Hypertension: Endpoints, Calibration, and External Validation. Comment on “Ledziński et al. Personalized Medicine in Pulmonary Arterial Hypertension: Utilizing Artificial Intelligence for Death Prevention. J. Clin. Med. 2025, 14, 8325”

2026 · 0 citations · Journal of Clinical Medicine · Open Access

Citations: 0 · Authors: 29 · Year: 2026

Abstract

The present Reply addresses the commentary by Pagnoni et al. on our recent study exploring explainable artificial intelligence (AI) for mortality risk prediction in pulmonary arterial hypertension (PAH). We acknowledge the importance of several key issues raised by the authors, including endpoint selection, calibration, decision thresholds, and external validation, all of which are central to translating AI-based prognostic models into clinical practice. Our original endpoint, defined as death by the next follow-up visit, was driven by the structure of nationwide registry data and reflects real-world clinical workflows, although we recognize the advantages of predefined time horizons and time-to-event approaches for future analyses. We discuss the trade-off between sensitivity and precision, emphasizing our deliberate prioritization of minimizing false-negative classifications in high-risk patients, while acknowledging the need for structured clinical pathways to manage false-positive results. We further address calibration and threshold selection, underscoring the necessity of additional clinical studies to support intervention-oriented recommendations. The role of phenotypic determinants and modifiable risk factors in enhancing personalization is highlighted as a key direction for future model development. We reaffirm the value of SHAP-based explainability for improving model transparency, while recognizing the need for continued refinement and clinical validation. Finally, we emphasize the strengths and challenges inherent to registry-based analyses, the importance of external validation, and the need for methodologically sound comparisons with established risk calculators. Overall, this exchange underscores the critical role of interdisciplinary collaboration in advancing clinically actionable and interpretable AI solutions for PAH.
