
Machine-Learning Implementation in Clinical Anesthesia: Opportunities and Challenges

2020 · 42 citations · Anesthesia & Analgesia · Open Access


Abstract

Recent Food and Drug Administration (FDA) approval of the first autonomous diagnostic system1 heralds the arrival of clinical machine learning (ML). ML is a promising form of artificial intelligence best suited to, but also necessary for, the predictive analytics required for clinical decision-making.2,3 ML focuses on the development of computer systems that can learn from big data (data of such volume, collection velocity, or complexity that they are difficult or impossible to process using traditional methods),4 identify patterns, and make decisions with minimal human intervention.5 As ML tools begin to be designed and targeted for clinical anesthesia applications, there will be growing pressure for anesthesiologists to clarify when and how clinicians add value, versus when ML can (and perhaps should) augment clinical practice and clinical decision-making (Table).6

Table. Potential Benefits, Challenges, and Current Limitations to Implementing ML Into Clinical Anesthesia

Potential benefits:
- Reduce clinician cognitive load
- Reduction in costs of care
- Increased access to care (eg, remote care delivery)
- Improved evidence supporting care recommendations, through big data and real-time analytics
- Standardization of care (reduction in care variation between clinicians and clinical centers)

Potential challenges:
- Clinical skill atrophy: maintenance of emergency manual skills and of cognitive skills
- Examination of impact on clinical workflow and work processes (eg, 737 Max) to prevent unintended safety events
- Impact on clinician autonomy and the clinician–patient relationship
- Bias in data and analysis can have unintended negative consequences
- Accountability for ML output, or for clinical actions undertaken as a result of ML output

Current limitations:
- ML cannot "contextualize" to bedside care of an individual patient
- Manual tasks (ie, intubation, vascular access) not easily replaced by machine
- Access to necessary big data still being established
- Emerging regulation: access to patient data (privacy protections and data ownership); standards to assess and evaluate ML accuracy; legal liability

Abbreviation: ML, machine learning.

For more than half a century, progressively shorter-acting drugs and improvements in patient monitoring technologies have fueled interest in anesthesia delivery as a target for automation.7 ML-guided anesthesia has already been piloted, including models of remifentanil and propofol interactions with processed electroencephalograms.8 In addition to a beneficial impact on quality, cost, and access to care, ML applications for clinical anesthesia will raise unique value-based and ethical challenges and disrupt established workflow processes, raising safety concerns.9 Premature ML implementation causes patient harm.10 Clinical anesthesiologists are uniquely positioned to consider such systems as they are developed and implemented, working to promote the benefits of ML and to reduce potential harms. As pioneers of patient safety, anesthesiologists should now consider how we will interact with, define our relationship to, and guide the implementation of novel ML systems.

Significant private investment,11 strong research interest, and compatibility with the social goal of health care cost reduction all drive the continued advance of ML into health care, including clinical anesthesia.12 Health care collaborations, such as those between Google's DeepMind and the United Kingdom National Health Service, Paige AI and Memorial Sloan Kettering, and IBM's Watson Oncology and MD Anderson, have all raised ethical concerns. Despite these challenges, global investment in ML for health care is predicted to reach $217 billion by 2028.11 To match the speed of development, both ethical and practical guidance for clinical ML implementation need to be developed and provided contemporaneously.
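The ML-guided anesthesia pilots cited above pair drug infusion with processed electroencephalogram feedback. As a minimal, purely illustrative sketch of what closed-loop control of anesthetic depth involves, the following hypothetical PID controller adjusts an infusion rate toward a depth-of-anesthesia index setpoint. The class name, gains, index scale, and setpoint are assumptions for illustration, not values from the cited studies, and this is far simpler than any clinically deployable system.

```python
# Illustrative sketch only: a discrete PID controller nudging an infusion
# rate toward a processed-EEG depth-of-anesthesia target. All names,
# gains, and the target index are hypothetical assumptions.

class DepthOfAnesthesiaPID:
    """Holds a processed-EEG index (eg, a 0-100 scale) at a setpoint."""

    def __init__(self, setpoint=50.0, kp=0.05, ki=0.01, kd=0.02, dt=5.0):
        self.setpoint = setpoint      # target index value (dimensionless)
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt                  # seconds between measurements
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured_index):
        # Positive error means the patient is "lighter" than target,
        # so the controller returns a positive infusion-rate adjustment.
        error = measured_index - self.setpoint
        self.integral += error * self.dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / self.dt)
        self.prev_error = error
        delta = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, delta)        # rate adjustment, clipped at zero
```

The clipping at zero stands in for the safety interlocks a real system would need; the editorial's point is precisely that such a controller cannot contextualize to the individual patient, which is why clinician supervision remains essential.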
As anesthesiologists approach clinical ML implementation, 4 areas are important to consider: (1) impact on workflow; (2) skill atrophy; (3) accountability; and (4) clinician autonomy.

IMPACT ON WORKFLOW

First, the impact of ML on anesthesia clinical workflow and work processes requires extensive examination. Significant safety and judgment failures have already occurred around the implementation of ML systems or output in non–health care work processes. Recent Boeing 737 Max and Tesla Model S crashes are attributable to inadequate assessment of the impact of automated systems on workflow and work processes. In both cases, operators' lack of familiarity with the automated piloting systems, and use of those systems outside their intended design, led to catastrophic adverse events.13,14 These failures have raised awareness of the potential for ML approaches to cause negative disruptive change within medicine,9 including the potential for similar failures, particularly around clinician and patient interactions with ML systems, and for inadequate in situ assessment of ML impact on operators and work processes to lead to patient harm.10 The dynamic and high-stakes clinical environment of anesthesia workflow is vulnerable.

SKILL ATROPHY

Second, as new technologies replace manual or cognitive tasks, atrophy or loss of those skills follows. In anesthesia, where a patient's life may depend on an anesthesiologist's ability to retake control from an automated system, maintaining clinical and cognitive skills will be necessary. Anesthesiologists' overreliance on automated anesthesia machine "self-check" systems has already led to patient harm when the automated check failed to identify a circuit obstruction.15 Which clinical skills are paramount and need to be protected from loss should be determined and prioritized.
Literature from non–health care, performance-based fields such as aviation recognizes the growing challenge of maintaining critical emergency skills when operators routinely function in progressively more automated contexts.16 Most concerning, after practice in largely automated contexts, pilots' manual skills at flying by hand remained largely intact (with only moderate, operationally significant "rustiness"), but fundamental cognitive skills atrophied significantly, including awareness of the plane's location and the abilities to reference charts, to configure the airplane anew after passing important way-points on a planned route, and to recognize and deal with instrument system failures when they arose. Recommendations to address these problems all center on increasing pilots' time practicing these skills, either through repeated simulations or through real-time practice.16 Unfortunately, cofollowing or coflying with an automated system appears ineffective at preventing cognitive skill atrophy, with accumulating evidence of the difficulty pilots have keeping their thoughts focused on the activities of an automated system that seldom fails.16

Simulator training has proven valuable for training anesthesiologists in crisis resource management and, later, for performance in nonsimulated crises. However, simulation training to address ML implementation presents several challenges. The first is verisimilitude. For both pilots and clinicians, high-fidelity simulator training is necessary to maximize the likelihood that simulation training will cognitively transfer to real environments.17 Such simulators are costly to construct, maintain, and operate, and they provide no guarantee of skill transfer. Because the clinician–computer interactions and points of interface for clinical ML are still being established, simulators and simulations will be unable to depict high-fidelity ML–clinician interactions until ML implementation is further established.
An additional, more salient concern is that the scenarios included in simulation training are based on problems that will likely be recognizable to, predictable to, and ultimately addressable by increasingly complex ML systems. By definition, simulation scenarios are preidentifiable as likely sources of clinical problems. The real problems anesthesiologists will face, and be called on to take over for, will be catastrophic unexpected events that may be difficult to train for without extensive direct clinical experience. This is similar to the performance differences between military-trained or senior pilots, who were able to compensate for the errors of the Boeing 737 Max Maneuvering Characteristics Augmentation System (MCAS), and junior, simulator-trained pilots, who were not as easily able.13 Analyses of Capt Sullenberger's landing of the Airbus A320 (US Airways Flight 1549) in the Hudson likewise showed the importance of experience and judgment relative to how less experienced pilots handled the same situation in simulation.18

Collecting the necessary knowledge of ML system failures to train clinicians for ML-related crises will take time. How much can be predicted from the aviation experience is unknown, but in the abstract, events like the 737 Max are already very valuable for identifying broad target areas. We should ensure clinician familiarity with ML systems before clinical deployment, rather than wait for failures to inform training. If our field decides that maintaining direct, hands-on patient experience is needed to preserve clinical competency and the ability to take over, then how many hours are necessary and what types of cases are required will need to be studied, as will the implications for patient care (ie, how to decide whether a patient receives ML-supported or provider-only anesthesia). The aviation field has long recognized that maintenance of skills requires more than simply logging the legally mandated number of flight hours in clear skies.
Facing challenging flight conditions is also needed.

ACCOUNTABILITY

Third, increasing reliance on ML tools and patient "big data" will impact the physician–patient dyad that has constituted the ethical underpinning of the fiduciary caregiving relationship. This relationship is likely to shift even further, into a relationship between patients and a learning health care system. Recent ethical concerns around ML applications also indicate that applications in health care could raise accountability concerns.9 Designers of autonomous systems for health care, such as diabetic retinopathy screening, have expressed willingness to assume responsibility for the output of a system (because it is, after all, intended to function autonomously).1 Because of the potential need for rescuing interventions, it is unlikely that anesthetic delivery systems would ever function fully autonomously, without clinician supervision. However, what accountability, and therefore liability, lies with the anesthesiologist versus with the ML system needs to be established.

CLINICIAN AUTONOMY

Fourth, the ongoing transition to systems-based anesthesia delivery, including broad adoption of electronic medical record (EMR) systems and an ongoing transition to a shift-based work model, impacts clinician autonomy. ML application to clinical anesthesia has the potential to become the tipping point at which a quantitative difference in autonomy becomes a qualitative problem. Whether through ML exceptionalism (the belief that a result is inherently better because it was produced by a computer) or because the operating room environment has become too data overwhelmed and clinicians too distracted, ML output may take on an authority never intended. As is already occurring with EMRs, anesthesiologists may find themselves progressively drawn into a clinical workflow focused on data entry, addressing data output, and reacting to algorithm-generated alarms rather than focusing on the actual patient.
It is already recognized as a problem in non–health care fields that a person disagreeing with an ML-recommended action is often required to furnish far more, and better quality, evidence to rebut the ML output than the data used to generate that output. Such barriers to disagreement discourage workers from questioning algorithmic output.6 During the past 20 years, American health care has seen the rise of a nonclinical, executive class.19 What bedside clinicians most value in an ML tool is unlikely to match what nonclinician purchasers of ML tools value. While clinical guidance provided by EMRs, and potentially by ML systems, may improve aspects of care by increasing compliance with evidence-based approaches, ML-driven alarms and guidance may also be used to control a clinician workforce in pursuit of reimbursement-driven performance metrics and the cost impacts of care choices on financial returns.

CURRENT LIMITATIONS TO ML IMPLEMENTATION

Currently, there are still significant limitations to ML-based anesthesia delivery. Manual tasks fundamental to the delivery of anesthetic care, such as intubation and vascular access, are not yet easily replaced by machines.7 Capture of the necessary big data on drug delivery and patient physiologic effects still needs to be established for ML-targeted drug delivery to improve on current pharmacokinetic and pharmacodynamic models.12 For clinical knowledge and decision support, with real-time access to current evidence-based data, ML systems are positioned to recommend evidence-based clinical actions where data exist, with greater perspective than any individual clinician. However, such systems lack the ability to contextualize a clinical decision to the care of an individual patient. At present, such systems are better deployed in support of clinician knowledge rather than as clinician replacement.
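The pharmacokinetic and pharmacodynamic models that ML-targeted drug delivery would need to outperform are, at their simplest, compartment models. As a minimal sketch of such a baseline, the following simulates a one-compartment model with first-order elimination under a constant infusion; the function name and all parameter values are hypothetical placeholders, not clinical constants.

```python
# Minimal sketch of a baseline drug-delivery model: one-compartment
# pharmacokinetics with first-order elimination, integrated by explicit
# Euler steps. Parameter values are illustrative, not clinical.

def simulate_plasma_concentration(infusion_rate_mg_per_min, minutes,
                                  volume_l=30.0, clearance_l_per_min=1.5,
                                  dt_min=0.1):
    """Return plasma concentration (mg/L) over time for a constant infusion."""
    k_e = clearance_l_per_min / volume_l   # elimination rate constant (1/min)
    concentration = 0.0
    trace = []
    for _ in range(int(minutes / dt_min)):
        # dC/dt = input/V - k_e * C
        dc = infusion_rate_mg_per_min / volume_l - k_e * concentration
        concentration += dc * dt_min
        trace.append(concentration)
    return trace
```

Under this model the concentration rises toward a steady state of rate/clearance; the big-data gap the editorial describes is exactly what would be needed for a learned model to individualize beyond such population-level parameters.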
Capture and analysis of available data still yield, at best, observational data, with all the inherent limitations of observational study design. Exploration of novel trial designs to integrate research with medical practice and learning health systems is underway.20

Approaches to regulating ML for health care are emerging, although far more slowly than the technology is changing. In the United States, the FDA has recognized that its traditional paradigm of medical device regulation was not designed for adaptive artificial intelligence and ML technologies, and it is currently designing procedures to guide premarket review of proposed clinical ML applications.3 In concept, such review will evaluate whether an ML application performs as intended (ie, whether the prediction the ML generates is accurate and any clinical action it undertakes or recommends is efficacious).1 The rising awareness of the need for access to a large patient data ecosystem to fuel ML development is being balanced against patient data privacy concerns. With Europe in the vanguard, legal reforms on data protection and privacy are underway in many countries; for example, the European Union (EU) has adopted the General Data Protection Regulation (EU 2016/679). Such reforms attempt to increase data subjects' privacy options and introduce further controls on data uses. These regulations cover not only data protection, but also the distribution of any benefits from the exploitation of personal data and the public acceptability of such exploitation (ie, whether patients have a stake in applications built from their data and, as with Memorial Sloan Kettering and Paige AI, whether clinicians have intellectual property rights to the clinical interpretations of data [such as slide reads by pathologists] used to train ML applications).
These evolving regulatory approaches will address important safety concerns around ML accuracy, patient data privacy protections, and data ownership. They will not address the workflow and human–ML interface challenges significant for the practice of anesthesia. In the near future, clinicians will likely collaborate with and manage ML systems that aggregate vast amounts of data, generate diagnostic and treatment recommendations, and assign confidence ratings to those recommendations. Systems have already been designed to leverage aggregate patient data for decision-making at the point of care. This integration expands the data supporting clinical decisions beyond the published studies, or even raw data, available to an individual clinician. As the influence of ML on the practice of anesthesia approaches, we must thoughtfully and carefully examine how our field will address ML, what impacts we want ML tools to have on clinical anesthesia, what research on ML is needed, and how to anticipate and prevent potential harms to patients and clinicians.

DISCLOSURES

Name: Danton S. Char, MD.
Contribution: This author conceived of the idea, co-wrote, and revised the manuscript.
Name: Alyssa Burgart, MD.
Contribution: This author co-wrote and edited the manuscript.
This manuscript was handled by: Richard C. Prielipp, MD, MBA.
