This is an overview page with metadata for this scientific article. The full article is available from the publisher.
A practical guide for nephrologist peer reviewers: evaluating artificial intelligence and machine learning research in nephrology
9 citations · 10 authors · 2025
Abstract
Artificial intelligence (AI) and machine learning (ML) are transforming nephrology by enhancing diagnosis, risk prediction, and treatment optimization for conditions such as acute kidney injury (AKI) and chronic kidney disease (CKD). AI-driven models utilize diverse datasets (including electronic health records, imaging, and biomarkers) to improve clinical decision-making. Applications such as convolutional neural networks for kidney biopsy interpretation and predictive modeling for renal replacement therapies underscore AI's potential. Nonetheless, challenges including data quality, limited external validation, algorithmic bias, and poor interpretability constrain the clinical reliability of AI/ML models. To address these issues, this article offers a structured framework for nephrologist peer reviewers, integrating the TRIPOD-AI (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis-AI Extension) checklist. Key evaluation criteria include dataset integrity, feature selection, model validation, reporting transparency, ethics, and real-world applicability. This framework promotes rigorous peer review and enhances the reproducibility, clinical relevance, and fairness of AI research in nephrology. Moreover, AI/ML studies must confront biases (data, selection, and algorithmic) that adversely affect model performance. Mitigation strategies such as data diversification, multi-center validation, and fairness-aware algorithms are essential. Overfitting in AI is driven by small patient cohorts faced with thousands of candidate features; our framework spotlights this imbalance and offers concrete remedies. Future directions in AI-driven nephrology include multimodal data fusion for improved predictive modeling, deep learning for automated imaging analysis, wearable-based monitoring, and clinical decision support systems (CDSS) that integrate comprehensive patient data. A visual summary of key manuscript sections is included.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations
Authors
Institutions
- Zhejiang University(CN)
- Second Hospital of Anhui Medical University(CN)
- Sir Run Run Shaw Hospital(CN)
- WinnMed(US)
- University Hospitals of North Midlands NHS Trust(GB)
- Phramongkutklao College of Medicine
- Phramongkutklao Hospital(TH)
- Medical University of South Carolina(US)
- Zhejiang University of Science and Technology(CN)
- Shaoxing University(CN)