This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Clinical applications of continual learning machine learning
237 citations · 2 authors · 2020
Abstract
With advances in artificial intelligence (AI), particularly in machine learning and deep learning, the potential uses for AI in medicine are growing. Continual learning, also known as lifelong learning or online machine learning, is a fundamental idea in machine learning in which models continuously learn and evolve based on the input of increasing amounts of data, while retaining previously learned knowledge.[1] This dynamic process of supervised learning allows the model to incrementally learn and autonomously change its behaviour, while not forgetting the original task. The recommender systems used by companies such as Netflix and Amazon are well known examples of continual learning. These systems instantly gather new labelled data as people interact with the model output and adjust accordingly.[2] In medicine, a continual learning model (previously trained with labelled stationary data from other patients) would ideally assist the clinician with tasks such as providing diagnoses or making management decisions. New patient data and the results of previous tasks (actual diagnoses or treatment outcomes) would be introduced to the model, which would then transfer its previous knowledge to the new data, fine-tune its current task, or even incrementally learn new tasks. Although continual learning machine learning systems sound ideal for medicine, in practice many longstanding challenges exist in applying them.[3] One main obstacle is catastrophic forgetting (also called catastrophic interference), in which new information interferes with what the model has already learned. This can lead to an abrupt decrease in performance while the new data are being integrated or, even worse, an overwrite of the model's previous knowledge with the new data.[4,5] Most of the current applications for continual learning in non-medical fields are less critically affected by this limitation.[2] Continual learning models in health-care settings address many heterogeneous problems that need multiple complex tasks. Moreover, although this is not unique to medicine, the stakes for real-time medical applications of AI are high because of their effect on health outcomes. A simple solution to catastrophic interference is to completely retrain the model every time new data are available, but this process can be computationally expensive and can inhibit real-time inference. Advances in cloud computing could provide a solution but, currently, accelerated computational resources compliant with the Health Insurance Portability and Accountability Act are legally complex to create and difficult to maintain securely.
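Catastrophic forgetting can be reproduced in a few lines. The sketch below is an illustrative toy (it does not represent any model discussed in this Comment): a simple logistic-regression classifier is trained online on two synthetic "tasks" with orthogonal decision boundaries, and plain sequential updates on task B erase most of what was learned on task A.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(w_true, n=500):
    # A linearly separable 2-D "task" defined by a true weight vector
    X = rng.normal(size=(n, 2))
    y = (X @ w_true > 0).astype(float)
    return X, y

def sgd_train(w, X, y, lr=0.5, epochs=20):
    # Plain online logistic-regression updates, with no mitigation
    # against forgetting (no rehearsal, no regularisation)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = np.clip(xi @ w, -30.0, 30.0)   # avoid overflow in exp
            p = 1.0 / (1.0 + np.exp(-z))
            w = w + lr * (yi - p) * xi
    return w

def accuracy(w, X, y):
    return float(((X @ w > 0).astype(float) == y).mean())

# Task A and task B have orthogonal decision boundaries
XA, yA = make_task(np.array([1.0, 1.0]))
XB, yB = make_task(np.array([-1.0, 1.0]))

w = sgd_train(np.zeros(2), XA, yA)
acc_A_before = accuracy(w, XA, yA)    # near-perfect after training on A

w = sgd_train(w, XB, yB)              # continue training on task B only
acc_A_after = accuracy(w, XA, yA)     # task A performance collapses
```

Nothing in the update rule protects the weights that encoded task A, so fitting task B drags them toward the new boundary; this is the failure mode that full retraining, rehearsal, or regularisation-based methods try to avoid.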
Health-care information governance across different countries is constantly evolving, making maintaining compliance difficult. Furthermore, the availability of the retrospective training sets needed to fully retrain the model with new data is especially challenging in health care because of consent-for-use constraints. For these reasons, online training methods that do not involve full retraining (but rather use new data only) are probably more realistic in the health-care setting. Current applications of machine learning and deep learning in medical research have been mostly restricted to supervised learning, whereby a focused task (eg, classification or segmentation of images) is trained using labelled data.[6,7] To date, only a few automated algorithms have been approved by the US Food and Drug Administration (FDA), for limited capacities such as detection of diabetic retinopathy or breast abnormalities.[8] All of these algorithms have been locked for safety, to prevent any potential for further learning or change after approval.[8] However, continual learning (ie, unlocked) machine learning models might be more advantageous because they are able to incrementally learn from their mistakes and fine-tune their performance with progressively more data, similar to the ways that human clinicians learn. There are specific areas within clinical medicine in which continual learning machine learning models could be safely implemented. One example is diagnostic testing, although the labelling of new data would be a rate-limiting step. When new patient data become available, the trained model would perform inference and make a diagnostic call. The new data would also need to be manually graded using the reference standard, and the results would then be used to update the model (figure, A). Manual image grading is a time-consuming step that will limit the overall use of an automated AI algorithm, since all new incremental data will need human input to produce reliable labels, but the performance of the model as it learns would not directly affect patients' outcomes. Continual learning machine learning models could also be used for predictive analytics, in situations in which clinical outcomes can be automatically obtained and fed into the algorithm (figure, B). For example, if a model were to predict a critical clinical outcome such as all-cause mortality within 3 months, then at 3 months the actual clinical outcome would be used to update the model. Since the standard of care would not change, this scenario is a safer setting in which to test continual learning algorithms, with the added benefit that manual grading would not be necessary. Ultimately, if the model's performance improves and surpasses expert predictions, then it might seem reasonable to integrate the model's output into the clinical care pathway.
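The predictive-analytics loop described above can be sketched as a simple buffering pattern. Everything here is hypothetical (the class and method names are illustrative, and a perceptron stands in for a real clinical model): a prediction is made immediately for each patient, and a weight update happens only when the true outcome, such as 3-month mortality, is later recorded.

```python
class DelayedFeedbackModel:
    """Toy incremental classifier updated only when real outcomes arrive."""

    def __init__(self, n_features):
        self.w = [0.0] * n_features   # perceptron weights
        self.pending = {}             # patient_id -> features awaiting outcome

    def predict(self, patient_id, features):
        # Score the patient now; keep the features until the outcome
        # (eg, 3-month all-cause mortality) is known
        self.pending[patient_id] = features
        return self._score(features) > 0.0

    def record_outcome(self, patient_id, outcome):
        # The observed outcome becomes the training label; only the new
        # case is used, so no retrospective training set is required
        features = self.pending.pop(patient_id)
        predicted = self._score(features) > 0.0
        if predicted != bool(outcome):   # perceptron-style correction
            sign = 1.0 if outcome else -1.0
            self.w = [wi + sign * xi for wi, xi in zip(self.w, features)]

    def _score(self, features):
        return sum(wi * xi for wi, xi in zip(self.w, features))
```

In a deployment, `record_outcome` would be driven automatically by the electronic health record at the 3-month mark; because the standard of care is unchanged while the model learns, its mistakes do not feed back into patient management.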
It is important to note that, before the model's predictions are used to change clinical decisions, a prospective randomised clinical trial should be done to compare against the standard of care. Moreover, the performance of the model could be affected, because the model will need to readapt to changing care paradigms. The ultimate goal is for continual learning models to do just that: optimise clinical management decisions in real time. For example, AI models could be combined with therapeutics to provide the optimum drug dosing and combination of drugs for individual patients, or they could help control the ventilator settings of intubated patients in critical care units.[9] In these situations, the model would be making active clinical decisions and attempting to optimise the eventual clinical outcome of the patient, which can lead to potential complications. Once the AI model output becomes fully integrated into the clinical management decision, a delay will occur in assessing the ultimate outcome for each participant (figure, C). Patients could potentially be harmed by erroneous versions of the model as it changes and updates. Furthermore, when these models are used in real time, a separate set of aggregated data does not exist with which to test the model's safety, since the model is directly affecting the clinical outcome. Other relevant challenges need consideration before implementing continual learning models in the clinical arena. First, no established methods exist for assessing the quality of these models.
After the initial launch of the model and evaluation of its performance using traditional metrics, other factors (eg, the collection process for new data, the automated organisation or labelling of new data, the knowledge transfer between new and original data, and the overall performance of the model after incorporating data) would all need to be validated while ensuring that no catastrophic interference occurred. Second, the regulatory challenges will be substantial. A 2019 white paper from the FDA[10] shows that a new framework is needed to allow AI algorithms that, by their very nature, will continuously update and change after they are approved. Third, the use of these models for clinical applications will require the acceptance of everyone involved in medical care. There is no fail-safe AI model, because the entire premise of continual learning models is that they improve by making mistakes, and systems must be established to respond when errors occur. Finally, continual learning models will have to merge clinical data from large numbers of patients, which can lead to privacy concerns. There is enormous potential for the use of continual learning AI models in the practice of medicine, but this technology should be implemented cautiously, beginning with lower-risk applications. Results from lower-risk cases can be used to develop regulatory guidelines and establish systems for addressing problems as they arise.
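One concrete shape a validation step could take is sketched below. This is an assumption for illustration, not an established method: each incremental update is applied to a copy of the model and accepted only if performance on a frozen legacy test set has not regressed, a crude automated check for catastrophic interference. The gate function, threshold, and tiny perceptron are all hypothetical.

```python
import copy
import numpy as np

rng = np.random.default_rng(1)   # deterministic generator for example data

class TinyClassifier:
    """Minimal perceptron standing in for an incrementally trained model."""

    def __init__(self, n_features):
        self.w = np.zeros(n_features)

    def update(self, X, y):
        # One online pass over newly labelled cases
        for xi, yi in zip(X, y):
            predicted = float(xi @ self.w > 0)
            self.w += (yi - predicted) * xi

    def accuracy(self, X, y):
        return float(((X @ self.w > 0).astype(float) == y).mean())

def gated_update(model, new_X, new_y, legacy_X, legacy_y, min_legacy_acc=0.8):
    """Accept an incremental update only if the updated model still performs
    on previously validated (legacy) data; otherwise roll it back."""
    candidate = copy.deepcopy(model)
    candidate.update(new_X, new_y)
    if candidate.accuracy(legacy_X, legacy_y) >= min_legacy_acc:
        return candidate, True    # update accepted
    return model, False           # interference suspected: keep the old model
```

A gate of this kind addresses only one of the factors listed above (post-update performance); the data-collection and labelling pipeline would still need separate validation.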
As with any new technology, careful risk management will be essential, but the potential benefits of this powerful method are impressive and might ultimately change the practice of medicine.

AL is employed by the US Food and Drug Administration (FDA) and declares grants from Santen, Carl Zeiss Meditec, and Novartis, and personal fees from Genentech, Topcon, and Verana Health, outside of the submitted work. CL declares no competing interests. Financial support was received from the National Institutes of Health National Eye Institute (K23EY029246, R01AG060942) and an unrestricted grant from Research to Prevent Blindness. This Comment does not reflect the opinions of the US Government or of the FDA.

References
1. Parisi GI, Kemker R, Part JL, Kanan C, Wermter S. Continual lifelong learning with neural networks: a review. Neural Netw 2019; 113: 54-71.
2. Portugal I, Alencar P, Cowan D. The use of machine learning algorithms in recommender systems: a systematic review. Expert Syst Appl 2018; 97: 205-27.
3. Hassabis D, Kumaran D, Summerfield C, Botvinick M. Neuroscience-inspired artificial intelligence. Neuron 2017; 95: 245-58.
4. McClelland JL, McNaughton BL, O'Reilly RC. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychol Rev 1995; 102: 419-57.
5. McCloskey M, Cohen NJ. Catastrophic interference in connectionist networks: the sequential learning problem. Psychol Learn Motiv 1989; 24: 109-65.
6. Lee CS, Baughman DM, Lee AY. Deep learning is effective for the classification of OCT images of normal versus age-related macular degeneration. Ophthalmol Retina 2017; 1: 322-27.
7. De Fauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med 2018; 24: 1342-50.
8. Abràmoff MD, Lavin PT, Birch M, Shah N, Folk JC. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit Med 2018; 1: 39.
9. Ghassemi MM, AlHanai T, Westover MB, Mark RG, Nemati S. Personalized medication dosing using volatile data streams. June 20, 2018. https://www.aaai.org/ocs/index.php/WS/AAAIW18/paper/viewPaper/17234 (accessed April 30, 2020).
10. US Food and Drug Administration. Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD): discussion paper and request for feedback. January, 2019. https://www.fda.gov/files/medical%20devices/published/US-FDA-Artificial-Intelligence-and-Machine-Learning-Discussion-Paper.pdf (accessed April 30, 2020).
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 citations