OpenAlex · Updated hourly · Last updated: 06.04.2026, 02:46

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

In Reply to Prashar and to Savage

2021 · 0 citations · Academic Medicine
Open full text at the publisher

0 citations · 1 author · 2021

Abstract

I thank Prashar and Savage for their interest in my commentary, and I agree with their thoughtful comments. They both discussed the "black box" nature of artificial intelligence (AI) and deep learning (DL) in medicine. "Black box" here means that such models can be highly sophisticated, with a massive number of parameters; given this complexity, it is difficult to confidently assess which aspects of the data drive a model's predictions.¹ The issue of interpretability connects to the data used to learn the model parameters, which goes beyond the complexity of the model itself. Specifically, the training data must not carry biases that will be transferred to the model (e.g., inadequate data from certain subgroups of people). It is essential that enough data are available to capture the full heterogeneity of all patients to whom these models will be applied. A challenge on this front is what "sufficient quantity" means, as traditional "power" calculations for statistical models are unlikely to apply to complex DL models.

Attention must also be paid to validation. Processes for carefully validating AI technology in medicine are essential and difficult, as validation must often be based on observational data² (randomized trials may not always be ethical or practical). Such validation is essential even when the "black box" issue is at least partially removed, and it faces many of the same data issues as model learning (e.g., the need to ensure that the validation data are representative of all patients for whom these technologies will be deployed). Like all technologies introduced into medicine, AI demands a fundamental understanding of how it makes predictions (interpretability) along with careful validation.

Because AI is such a data-driven framework, careful attention must be paid to the data used for learning and validation, and the need for this care must be embedded in medical education.


Topics

Radiomics and Machine Learning in Medical Imaging · Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare