OpenAlex · Updated hourly · Last updated: 15.03.2026, 00:38

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Navigating the impact of artificial intelligence on our healthcare workforce

2024 · 1 citation · Journal of Clinical Nursing · Open Access

Citations: 1 · Authors: 3 · Year: 2024

Abstract

Artificial intelligence (AI) has become highly topical, finding application in many aspects of life, including healthcare, to support screening, diagnostics, treatment planning, surgery, patient care, education and research. Artificial intelligence can increase work efficiencies, streamline processes, and analyse large global datasets quickly, accurately and cheaply. However, ethical, legal and professional concerns exist, including inherent biases, lack of transparency, and data confidentiality issues (Rowe et al., 2022). As with many emerging technologies, opinions are divided: some believe AI will create new employment opportunities and relieve skills shortages, while others view AI as a threat to their privacy, job security, workplace autonomy and professional identity (Aquino et al., 2023; Gillespie et al., 2023; Rowe et al., 2022). Automation and AI have historically been best suited to industries involving repetitive tasks, such as manufacturing, banking and information technology (Aquino et al., 2023). Human-centred 'caring professions' such as healthcare were perceived to be more resilient to the application of AI (Rowe et al., 2022). However, given the shortage of health professionals, the automation of certain healthcare tasks may increase access to healthcare (Aquino et al., 2023; Rowe et al., 2022). Artificial intelligence is integrating into many aspects of daily life, for example, virtual personal assistants embedded in devices such as watches and smartphones. These can capture personal physiological and physical activity data and, in real time, analyse and interpret the data using AI systems (Davis et al., 2022). These data can then alert the user, caregivers and health professionals to an impending health event (Davis et al., 2022).
Artificial intelligence-based systems are likely to become future life mentors and possibly even medical service providers, interpreting personal information based on a vast collective dataset at high speeds and low cost without the intervention of health professionals. However, Rowe et al. (2022) warn that AI cannot account for cultural, social and ethical factors to provide a contextualised judgement, and therefore, human intervention is still required. There has been much discussion on the potential of workforce deskilling due to the implementation of AI (Duran, 2021; Ross & Spates, 2020). In the context of AI, deskilling is the loss of skills after tasks have been automated, contributing to a reduction in discretion, loss of autonomy (Ross & Spates, 2020) and possibly professional identity (Strich et al., 2021). Some believe AI decision-making in the workplace is acceptable but want to retain control and view AI as a decision-support tool (Gillespie et al., 2023). In a study involving healthcare professionals, Yoo et al. (2023) reported that some clinicians believe the integration of AI has the potential to personalise patient treatment, increase safety and reduce workloads because staff would be required to make fewer decisions. Participants stated that if their clinical decisions diverged from the AI recommendation, they would question their own judgement and consult with peers (Yoo et al., 2023). Other clinicians in the same study were sceptical about using AI, voicing fears of loss of autonomy, deskilling and not being offered the opportunity to develop and practice skills. Artificial intelligence can learn, analyse, interpret and make complex decisions (Strich et al., 2021), but the reliability of the system depends on the accuracy of the underpinning algorithm (Ross & Spates, 2020). 
Furthermore, all decisions or recommendations involve a moral responsibility, and there is the risk of moral deskilling if AI is allowed to make decisions on behalf of humans (Duran, 2021). Ross and Spates (2020) state that AI was intended to support clinical decision-making, not replace the role of healthcare professionals. The use of technology in decision-making has existed for decades, but unlike existing technology, AI does not allow for human input in decision-making. This risks staff not only deskilling but also losing their professional identity and sense of value (Strich et al., 2021). Less experienced health staff may rely heavily on AI, depriving them of the opportunity to develop critical thinking skills. Reliance on AI may also lead to automation complacency, when clinicians cease looking for evidence and accept decisions made by the AI system without question (Ross & Spates, 2020). Moreover, the more complex the problem, the greater the likelihood of AI reliance, resulting in professional deskilling (Ross & Spates, 2020). By not questioning or reflecting on the decisions, biases in the AI decision-making process may go undetected (Strich et al., 2021), resulting in suboptimal patient outcomes. As AI becomes the norm, healthcare professionals risk developing a 'completion mentality', more focused on completing electronic forms than patient-centred care, and looking to AI for diagnostic and treatment decisions rather than relying on their professional training and skills (Duran, 2021). This may contribute to a one-size-fits-all response, irrespective of the context. However, responsibility and accountability still reside with the health professional, since AI currently cannot, for example, contextualise clinical judgements or develop therapeutic relationships with patients and their families.

Nil. Nil. No conflict of interest has been declared by the authors.
Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Topics

Artificial Intelligence in Healthcare and Education · Biomedical and Engineering Education · Quality and Safety in Healthcare