OpenAlex · Updated hourly · Last updated: 16.03.2026, 02:19

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Artificial intelligence in nursing education: Prospects and pitfalls

2024 · 15 citations · Journal of Advanced Nursing · Open Access
Open full text at publisher

15 Citations · 3 Authors · 2024

Abstract

The influence of digital platforms and artificial intelligence (AI) on our daily lives often goes unnoticed. From the algorithms that support our smartphones and driverless vehicles to the medical diagnostic systems used by health professionals, AI is increasingly taking over decision-making tasks traditionally performed by humans. While enhancing human efficiency, this shift also introduces a myriad of ethical and legal uncertainties that demand our attention. These uncertainties encompass data security, information privacy, and the authenticity, transparency and integrity of AI-generated material, as well as inherent AI biases (Cleary et al., 2024). AI systems draw on a vast corpus of data, synthesizing information from multiple sources and identifying complex patterns and relationships, and thus have the potential to significantly impact nursing education and practice. Unlike traditional computer technology, AI has the capacity to think, learn, perceive, and make rational decisions without human intervention (Sousa et al., 2021). However, a recent study comparing the clinical decision-making of expert and student nurses with that of a generative AI platform showed that although the generative AI platform delivered responses faster than the nurses, it was less decisive and suggested unnecessary diagnostic tests (Saban & Dubovi, 2024). Generative AI platforms, like the one used in that study, utilize language models to analyse clinical scenarios and rapidly generate plausible responses (Saban & Dubovi, 2024). AI is here to stay and will continue to develop and gain traction whether or not the nursing profession accepts it (Archibald & Clark, 2023). In recognition of AI's influence, the Nursing and Artificial Intelligence Leadership Collaborative, a group of nursing and technology experts, recognized the need for the nursing profession to lead and drive how AI is used across the health system (Ronquillo et al., 2021).
They identified the need for nurses to be actively involved in the way AI is developed and implemented to shape the future of health care (Ronquillo et al., 2021). This editorial focuses on the nursing education sector and the impact of rapidly evolving AI technologies. AI has applications in higher education settings, including nursing, and has the capacity to improve student learning opportunities, increase the efficiency of educators and enhance research capabilities (Carobene et al., 2023; Sousa et al., 2021). It has disrupted traditional learning and teaching methods, and has the potential to transform higher education (Archibald & Clark, 2023). Like any change, AI has had a mixed response in nurse education, with some embracing the advent of AI as nothing more than a helpful tool, and others having reservations about the authenticity and academic integrity of AI use and AI-generated outputs. In their discussion on the use of ChatGPT, Archibald and Clark (2023) identified three responses to AI in nurse education: the avoidance stance, where AI is avoided and not addressed; the prohibition stance, where AI is positioned as a threat to academic integrity and therefore punishable; and the integration stance, where AI is integrated and embedded into educational processes and students are taught to use it appropriately. Increasingly, AI is being integrated across nursing education to free up time and resources, with institutions actively developing safeguards to support its responsible use and maintain the integrity of their courses. Nursing educators are often time-poor, with multiple responsibilities across teaching and learning, governance, industry partnerships, scholarly activities and leadership, alongside information overload. Generative AI tools, like ChatGPT, can potentially assist, for example, by generating lesson plans, providing additional resources to create engaging learning material and developing and grading assessment items (Archibald & Clark, 2023).
Manually grading students' work can be a laborious and time-consuming task. AI is reported as having the capacity to grade assessment items and provide student feedback (Sousa et al., 2021). However, an AI grading system is only as reliable as the data used to train it. Since the data from which AI sources its information may not be comprehensive, reliable or evidence-based, generative AI tools may have inherent biases that compromise the AI-generated response (Cleary et al., 2024). There is, therefore, a risk of inherent bias in AI, and given the lack of transparency in the AI decision-making process, it may be difficult for the educator to assess the accuracy of the student feedback provided (Tundrea, 2020). This may bias student results. Ultimately, the educator remains responsible for assessing students' progress and awarding their grades, regardless of the technology used. Some literature has raised concerns that the widespread use of AI could potentially limit scholarship, with crucial skills such as critical thinking, problem-solving and researching techniques being neglected (Cano et al., 2023; Tundrea, 2020). This can adversely impact the learning process and perpetuate misinformation (Cano et al., 2023). It is important that educators lead by example, supporting students in their understanding of AI's limitations and demonstrating, through the educator's own outputs, the importance of evidence-based practice and the transparent attribution of source citations, to prepare students to use AI effectively and with integrity. Many higher education facilities have moved to asynchronous online learning. AI e-learning platforms can be customized to meet individual students' needs and provide one-on-one virtual teacher support, compensating for the differing needs within the student cohort (Tundrea, 2020).
The AI system can monitor student group discussions, enabling the academic to assess student engagement and progress (Sousa et al., 2021). However, advanced AI systems can collect vast amounts of information on students' learning practices without their knowledge (Tundrea, 2020). These systems can differentiate between sensitive and non-sensitive data, deduce the user's emotions from their keyboard-typing patterns and determine their level of anxiety or confidence (Tundrea, 2020). Although this information may be used to support students, it raises potential concerns regarding confidentiality and privacy. Students and academics should be notified if such systems are employed, and the privacy risks should be disclosed. Using AI in assessment tasks raises concerns of academic integrity, plagiarism or even fraud if the AI system is not acknowledged as the source of information (Cano et al., 2023). Assessments are being modified across the sector to mitigate these concerns and ensure academic integrity and fairness. Educators can encourage critical thinking by developing student activities comparing AI-generated content with reliable peer-reviewed content (Cano et al., 2023). However, students may lack the depth of knowledge and insight to validate the veracity of the AI-generated work. Oral assessment items have also been suggested to reduce the risk of AI input (Cano et al., 2023). Unfortunately, student-recorded oral presentations can be labour-intensive to assess and are not foolproof, since students can generate an AI transcript for an oral presentation and read the script verbatim. Across the nursing education sector, we are expected to contribute to new knowledge and publish our scholarship activities, which can prove challenging given our work commitments.
It is, therefore, unsurprising that AI has found application in publishing activities, including formulating research questions and collecting, analysing and interpreting data sets (Carobene et al., 2023). Generative AI has the potential to foster innovation and has been used to co-author publications and modify manuscripts to fit journal guidelines to maximize the chance of acceptance (Carobene et al., 2023). In a proof-of-concept study, Májovský et al. (2023) used generative AI to create a convincing, fraudulent randomized controlled neurosurgery research article in approximately one hour. The article was seemingly flawless except for some non-existent citations and semantic inaccuracies detected by subject experts. Hence, at present, while AI may be capable of automated writing, manuscripts will likely be flawed due in part to AI's training on non-peer-reviewed content. However, this may not be the case in the future. Despite the opportunity for misuse, AI has the potential to enhance the quality of research outputs when researchers adhere to ethical research practices regarding how AI is utilized (Cleary et al., 2024). Some peer-reviewed journals recognize the value of generative AI and accept its use. Wiley Author Services (2024) includes guidelines on the use of AI when compiling and reviewing manuscripts and requires authors to disclose their use of AI within their work. Authors are responsible for the accuracy of their manuscripts, and since AI raises questions of originality, integrity and accountability, it cannot fulfil the authorship criteria (Carobene et al., 2023). Additionally, while AI's efficiency, precision and creative abilities may exceed those of humans, reliance on such systems risks suppressing the development of the researcher's own critical thinking, independent reasoning skills and creativity (Carobene et al., 2023). Peer review of manuscripts by experts in the field, including nursing academics, ensures research rigour and credibility.
AI can potentially assist reviewers and editors by automating the screening of manuscripts, improving the quality of reviewers' feedback and flagging integrity and quality issues (Carobene et al., 2023; Wiley Author Services, 2024). However, Wiley Author Services (2024) warns that no part of a manuscript may be uploaded into a Generative AI tool during the review process since it may breach confidentiality and copyright laws. There is also the risk that reliance on AI may devalue the reviewing process due to a lack of transparency, thoroughness and integrity. The development of sophisticated plagiarism software has resulted in the retraction of many well-regarded publications in recent years. Also, there have been some high-profile cases of academic misconduct and scientific fraud due to the fabrication or misrepresentation of findings (Májovský et al., 2023). Currently, limited software is available to reliably detect AI-generated publications, including manipulated images. There is a risk that if members of a research team use AI, its use may not be reflected in current anti-plagiarism reports. However, technology is developing rapidly, and there is little doubt that software will soon be available to reliably detect AI-generated outputs. This will raise questions of authorship, authenticity, plagiarism and possible academic misconduct, which may result in the retraction of AI-assisted publications if the use of AI has not been accurately acknowledged. AI is here to stay, and current trends indicate that AI will likely become a dominant means of accessing and generating knowledge. There is a need for educators and educationalists to consider how best to guide students in the responsible use of AI, including its limitations and the appropriate acknowledgement of its use. Strong leadership is required in education to develop clear guidelines to support AI literacy. 
Nurse leaders, including those within the education sector, should consider change management strategies to support the integration and adoption of AI as it rapidly transforms the sector. Leaders can facilitate change management by clearly communicating change-related knowledge. Communication and leadership are the cornerstones of integrating AI in an informed way across nurse education to maintain academic integrity. Educators should lead by example and adhere to codes of ethics and moral standards when using AI in their scholarly activities, ensuring a rationale for its application and accurately disclosing its use. It is also important to remain engaged and curious about new developments and to maintain a continuous learning mindset that supports AI fluency.

None.

No conflict of interest has been declared by the authors.

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
