OpenAlex · Updated hourly · Last updated: 16.03.2026, 20:39

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

In Reply: Machine Learning and Artificial Intelligence in Neurosurgery: Status, Prospects, and Challenges

2021 · 3 citations · Neurosurgery
Open full text at the publisher

Citations: 3 · Authors: 3 · Year: 2021

Abstract

To the Editor: We are grateful for Dr Lim's observations.1,2 We are similarly encouraged by the increasing number of academic studies involving artificial intelligence (AI) and machine learning (ML), as well as by the increasing attention these fields have received in the popular press. As in many of the interactions between medicine and emerging technologies, the uptake of these methods in the field of neurosurgery has been slow. Despite the impatience that this sluggish pace cannot help but engender, the need to develop and test safe practices and careful review cannot be hastened. The question of bias in AI and related algorithms has parallels in the simpler model-based analyses commonly used in the medical literature. Such models also require training on a training set, careful validation on unseen data, and then ongoing review in clinical practice. The many iterations of the CHADS2 score (most recently, CHA2DS2-VASc) were initially developed to estimate stroke risk in nonvalvular atrial fibrillation, but are used in practice to decide when to begin anticoagulation.3,4 The CHADS2 score and similar model-based tools serve as a testament to the capacity of the medical literature to process highly quantitative data and translate them into dynamic clinical practice. To take their rightful place in the field of neurosurgery, AI and related algorithms must pass through a similar process, which relies on the openness of both the method and the underlying data. Proprietary models and algorithms such as the sepsis model unfortunately seem to circumvent this process and, for this reason, are subject to significant and inescapable limitations. Although we opted to omit Explainable AI from the review, it has the potential to become an essential part of the openness and transparency required by the medical literature.
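The CHA2DS2-VASc score mentioned above illustrates the kind of fully transparent, additive model the letter contrasts with "black box" algorithms: every variable's contribution is immediately apparent. A minimal sketch of the published scoring criteria follows; the function name and argument flags are illustrative, not from the article, and this is not a clinical tool.

```python
def cha2ds2_vasc(chf, hypertension, age, diabetes,
                 stroke_tia, vascular_disease, female):
    """Compute the CHA2DS2-VASc stroke-risk score (0-9).

    Each argument except `age` is a boolean flag; `age` is in years.
    """
    score = 0
    score += 1 if chf else 0              # C: congestive heart failure
    score += 1 if hypertension else 0     # H: hypertension
    if age >= 75:                         # A2: age >= 75 scores 2
        score += 2
    elif age >= 65:                       # A: age 65-74 scores 1
        score += 1
    score += 1 if diabetes else 0         # D: diabetes mellitus
    score += 2 if stroke_tia else 0       # S2: prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0 # V: vascular disease
    score += 1 if female else 0           # Sc: sex category (female)
    return score


# Example: a 72-year-old woman with hypertension and no other risk factors
# scores 1 (hypertension) + 1 (age 65-74) + 1 (female) = 3.
print(cha2ds2_vasc(chf=False, hypertension=True, age=72, diabetes=False,
                   stroke_tia=False, vascular_disease=False, female=True))
```

Because the model is a simple sum of labeled terms, external review of both the method and the weights is trivial, which is exactly the property the letter argues Explainable AI should restore to deep learning models.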
This nascent field aims to put deep learning and similar "black box" algorithms in the same category as traditional models, in which the significance and influence of every variable are made immediately apparent. In addition to facilitating ongoing external review and improvement of published models, Explainable AI could significantly mitigate the potential for bias during model development. Nevertheless, those models that make it to clinical practice will still be subject to subsequent, meaningful postmarketing surveillance. One of the complicating factors in this process is that, given the complexity of these systems, the tools built to explain them will similarly need to be open and explainable. We wholeheartedly agree with the paradigm of expert scientists collaborating with clinicians to produce the next generation of clinical AI and ML algorithms. Similar to the collaborations that exist now between physician scientists and statisticians, these collaborations should symbiotically form the basis for creatively building new software and hardware tools. Currently, an unfortunate distinction is often made between data scientists and traditional statisticians. It is essential that traditional statistical rigor be applied to AI and ML approaches even as these newer technologies begin to take their rightful place in clinical practice.

Funding

JG is supported by NIH K08 CA230172.

Disclosures

The authors have no personal, financial, or institutional interest in any of the drugs, materials, or devices described in this article.


Topics

Acute Ischemic Stroke Management · Artificial Intelligence in Healthcare and Education · Atrial Fibrillation Management and Outcomes