This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Ethics, Artificial Intelligence, and Critical Care Nursing
0
Citations
1
Author
2025
Year
Abstract
Artificial intelligence (AI) and critical care nursing: what are you to make of it? There are hundreds of articles and news accounts telling us how AI will make our world much better or, alternatively, how it will destroy it. As a critical care nurse or unit leader, you have no input into developing the AI systems you might encounter in your practice and probably have little or no input into which systems will be used. Here, I will suggest that you can have an important role in ensuring that you and your patients are getting the best from AI while avoiding some of its problems. To do this, I will first clarify the terminology of AI and say something about how the systems are developed and thus how some of the ethical issues arise. Data bias has been identified as an important problem in AI, so it will serve as one example. Next, I will discuss the tools that you as a human person, a nurse with experience, knowledge, and what I call nursing intuition, bring to the situation and that make all the difference.1

The Organisation for Economic Co-operation and Development has defined an AI system as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”2(p4) An AI system is designed by software engineers who develop decision trees or algorithms and objectives for the system to use to predict outcomes or recommend decisions or answers in certain situations. Engineers make available massive amounts of pertinent data that the system can sift to identify answers to particular questions. The benefit is that a computer can be more rapid, thorough, and precise in such a review than people can be.
For example, when you google AI in health care, the system searches all its databases where AI and health care are found together and presents the results to you, usually in order of the most recent and/or the most salient. In health care, enormous amounts of patient data regarding vital signs, electronic monitoring, laboratory values, and often individual patient medical records are stored in vast databases, some of which are specific to a particular health care system and some of which come from public databases such as the Centers for Disease Control and Prevention, the Behavioral Risk Factor Surveillance System, PubMed, OpenFDA, and the like.

Machine learning adds a layer to the system. According to Mennella and Maniscalco,3 machine learning generates “new” knowledge by using data that are fed into a training set. Asked, perhaps, to identify factors that are associated with sepsis in patients, the system is given data from a vast number of patients who were diagnosed with sepsis, along with all the possible laboratory and clinical record information available for such patients over a period of time preceding the diagnosis. The system can then sort through the data and perhaps identify a few signs that appear early, before sepsis becomes full blown. For example, a decrease in body temperature can be an early sign of impending sepsis that could easily be missed by the health care team but can be identified by the machine learning program. A prompt to the care team can initiate treatment before the patient has full-blown sepsis and perhaps save a life.

Deep learning adds another layer to the system.4 Here several levels of data analysis are stacked on top of one another, mimicking the layering and function of the neural system in the human brain. In one layer, data are analyzed and shuffled, then reanalyzed time and time again in the search for patterns or important features.
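The kind of early-warning scoring described above can be sketched in a few lines. This is a minimal illustration only: the features, weights, and alert threshold below are invented for demonstration (in a real system they would be learned from patient data), and none of the numbers carry any clinical meaning.

```python
# Illustrative sketch: hypothetical feature weights standing in for what a
# trained machine-learning model might derive from historical sepsis cases.
# All values are invented for demonstration, not clinical guidance.

def sepsis_risk_score(temp_change_c, heart_rate, lactate_mmol_l):
    """Combine weighted early-warning features into a single risk score."""
    score = 0.0
    # A *drop* in body temperature contributes to risk (note the sign):
    if temp_change_c < -0.5:
        score += 2.0
    if heart_rate > 100:
        score += 1.5
    if lactate_mmol_l > 2.0:
        score += 2.5
    return score

def should_alert(score, threshold=3.0):
    """Prompt the care team when the combined score crosses a threshold."""
    return score >= threshold
```

The point of the sketch is the one the text makes: which signs count, and how much each counts, is fixed by whoever builds and trains the system, long before the alert reaches the bedside.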
It is worth noting that the patterns or important features to be searched for are initially identified by the programmer. In such a system, there are junctions where specific weights or biases are assigned, again often chosen by the programmer. Data that meet the criteria are passed to the next lower level and again analyzed and reanalyzed in the search for other salient patterns. Vast amounts of data are available to the system, and the data become more refined as they are filtered through each of these layers. An essential point: the data available to the system, the questions asked by the programmers, and the patterns and features identified as important or unimportant by the programmer all have an important impact on the quality of the answers that are achieved.

Bias has been identified as a problem that affects some patients, particularly patients of color and vulnerable populations. Bias arises from a variety of sources: biases held by the system designer, those inherent in the questions asked and the weights given to various data points, and those built into the specific kinds and amounts of data fed into the system. First, we all have biases for and against ideas, products, and even people. Most of us are aware of some of our biases, but few if any of us are likely to be aware of all of them. Insofar as these biases around ideas, goals, groups, and so on are pertinent to the system being designed, they may have an important impact on the recommendations the system will eventually offer. Second, the goal of the project is crucial because it determines in important ways the questions asked, the data chosen for inclusion, and how various data are weighted in the system. We are all aware of how limited we feel by questionnaires and surveys that fail to offer options we would prefer to use. Many surveys are designed to get the answer the questioner is looking for, for example, those used for political or advertising purposes.
The art and science of writing questions to elicit unbiased data are exceedingly complex and require considerable training, thought, and testing to ensure that the answers are valid. Next, how data are used at strategic decision points in the system is also important. For example, if the system is searching for diabetes and trying to identify persons at risk for diabetes, body weight, family history, and racial background would all most likely be given extra weight in the decision tree. However, important data from people who are significantly underweight or whose racial background is unknown might then be excluded. Finally, there is the problem of what kinds of data are available for use in these systems. For medical data, we have massive amounts of good data coming from reliable sources. However, there are important places where even these data may fall short. Early on, it was noted that women were largely excluded from cardiac research. That problem is now being rectified. However, vulnerable populations continue to be left out, not by design but by their lack of availability. Some groups have low participation in medical research because of their distrust of the system. Others may not be able to take time away from their regular duties to participate. Rural populations may live too far from research institutions to be able to participate. Even today, much medical research is heavily weighted toward White people, and within that group, urban and more affluent persons are the most likely to be included. Thus, the quality of the input data may be quite high while still vastly incomplete, leaving some patients vulnerable to system bias.

Furthermore, multiple other problems can arise with AI.
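The effect of an incomplete training sample can be shown with a toy calculation. Everything here is invented: two hypothetical groups of "risk marker" values, with group B barely represented, and a cutoff derived from the sample mean as a crude stand-in for model training.

```python
# Toy illustration (invented numbers): how an unrepresentative sample can
# bias a simple decision rule against an underrepresented group.

# Hypothetical "risk marker" values; group B is barely sampled.
group_a = [5.0, 5.2, 5.4, 5.6, 5.8, 6.0, 6.2, 6.4]   # well represented
group_b = [4.0, 4.2]                                   # underrepresented

training_sample = group_a + group_b  # group B is only 20% of the data

# Derive a screening cutoff from the sample mean (a stand-in for training):
cutoff = sum(training_sample) / len(training_sample)

# The cutoff tracks the majority group, so every group B patient falls
# below it and would be systematically screened out as "low risk".
flagged_b = [x for x in group_b if x >= cutoff]
```

The input data here are all "accurate," yet the rule derived from them still fails one group entirely, which is the distinction the paragraph draws between data quality and data completeness.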
For example, the “black box” problem can occur when a deep learning system, after initial training, is able to continue to refine its search with increasingly complex analyses and recommendations without the programmer being able to see what questions the system is asking itself or how the system is weighing its data. The programming engineer no longer has control of the process or the output. This lack of control can then lead to errors in analyses and recommendations. In such a situation, the system would need to be radically reworked. Another problem hitting the news is AI “hallucinations.” Like human hallucinations, the system takes real information but misinterprets it, perhaps because of an overload of false data. Again, the information given in its recommendations cannot be trusted. The result of all these situations is that either the system fails to identify a problem and offer a timely recommendation or it offers an erroneous recommendation. However, the principles discussed here will continue to apply.

So, what does this mean for you? You have no input into the design of the systems or the data used in them. However, as critical care nurses you have 2 important qualities that could help you identify and perhaps mitigate difficulties that arise from these biases. First, you have awareness of problems that can occur both with AI systems and with their data. You can verify the data that are being used in system recommendations for your patient by monitoring the accuracy and completeness of the data in your patient’s medical records. You can pay particular attention to your patients who do not fit the common profile for the situation. Examples include patients with lung cancer who have no history of smoking or men with breast cancer. As with all technology, we cannot simply rely on the data the machine presents to us; we must also assess whether the data fit the reality in front of us.

Second and most important, critical care nurses are human persons.
As a person, you bring some of the same cognitive skills but also much more to the situation. You have, or can develop, a well-trained sense of intuition. Deep learning AI systems attempt to mimic the radically complex layers of analysis we use daily to evaluate and solve problems. Such systems, however, are limited to cognitive processes and algorithms. They can both give erroneous recommendations and fail to see important changes happening. First, you have significant embodied knowledge as a nurse; that knowledge is so deep in your mind, your body, and your emotions that it has become second nature. Next, you have developed your sensory perceptions to attend to even the most subtle details of the whole situation: patient, clinical data, family, and environment all inform your thinking and your actions. Machines cannot see the patient frown or notice pain in the patient’s eyes. They cannot notice subtle discolorations or changes in skin texture that you may spot as you help the patient bathe or change position. Machines cannot detect odors of bacterial infection or altered body chemistry that may show up before a laboratory test has even been ordered. All these data are outside the purview of machine learning. Critical care nurses’ observations put together with good AI can help improve patient outcomes. I think here of Winston Churchill speaking about his role in World War II or Sully Sullenberger, who successfully landed a jet plane on the Hudson River, saving the lives of all aboard. Both realized that everything they had done before, even the mistakes they had made, enabled them to accomplish what they did. Similarly, you possess a wealth of experience and knowledge in contemporary critical care!

Artificial intelligence is here to stay. It can be a successful adjunct to good critical care. However, alone it can never do what well-educated, committed professional nurses do.
Your role, as always, is to pay attention to all of the information you are receiving from increasingly diverse sources, as well as to your own intuition, when evaluating your patient's situation. Do not assume the system will identify all the problems that might be arising. If you find you are inclined to disagree with an AI recommendation, begin by trying to see why you have taken that position. Do not distrust yourself. Bring the disagreement to the attention of your colleagues as you try to achieve more clarity. Clearly AI adds yet another level of complexity to your already radically complex world, but as nurses, you can use this technology to improve the care of your critically ill patients.
Similar Works
The Strengths and Difficulties Questionnaire: A Research Note
1997 · 14,568 citations
Making sense of Cronbach's alpha
2011 · 13,745 citations
QUADAS-2: A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies
2011 · 13,582 citations
A method for estimating the probability of adverse drug reactions
1981 · 11,464 citations
Evidence-Based Medicine
1992 · 4,139 citations