OpenAlex · Updated hourly · Last updated: 15.03.2026, 00:38

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Powered by AI: advancing towards artificial intelligence algorithms in Australian hospital pharmacy

2024 · 1 citation · Journal of Pharmacy Practice and Research · Open Access
Open full text at publisher

Citations: 1
Authors: 3
Year: 2024

Abstract

Imagine hospitals where clinicians can quickly and accurately identify patients at risk of medication harm, and why. This is what artificial intelligence (AI) promises, and it is closer than we think. While the past decade brought electronic health records (EHRs) and decision support systems, AI-enabled machine learning (ML) prediction models and large language models have now emerged, with the potential to greatly assist clinical decision-making and improve patient outcomes. For example, AI can predict optimal doses of pharmacokinetically complex medications[1] and identify adverse drug reactions among coded discharge data. These new tools can support busy pharmacists by automating tedious tasks and discerning clinical scenarios warranting pharmacist intervention.

This editorial highlights considerations relating to AI/ML technologies applied to medicines management in Australian hospitals, drawing insights from local experience in designing and evaluating an ML dosing algorithm for unfractionated heparin (UFH).

Risk prediction algorithms, such as the CHA₂DS₂-VASc and HAS-BLED scores, are already common; these were developed using conventional statistical (regression) methods. But with the availability of 'big data' from EHRs across multiple hospitals, clinician researchers, data scientists, and informaticians can now collaborate to develop more accurate, real-time predictive algorithms using AI/ML. Examples include predicting an individual's likelihood of a medication-related hospital readmission, of suffering a bleed with anticoagulant therapy, or of rapid deterioration due to undertreated illness. Detecting and treating these conditions can optimise patient outcomes.

Using large datasets to develop models simply because the data are available, without first defining a clear and useful clinical purpose, will generate tools irrelevant for improving diagnostic or treatment recommendations.
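To make the contrast with conventional rule-based scores concrete, the CHA₂DS₂-VASc stroke-risk score mentioned above can be computed with a handful of fixed rules. The following is a minimal sketch using the published scoring criteria; the function name and interface are our own, not from this editorial:

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_or_tia, vascular_disease):
    """Compute the CHA2DS2-VASc stroke-risk score (0-9).

    Scoring: CHF/LV dysfunction 1; hypertension 1; age >=75 2 (65-74 1);
    diabetes 1; prior stroke/TIA/thromboembolism 2; vascular disease 1;
    female sex 1.
    """
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    if age >= 75:
        score += 2
    elif age >= 65:
        score += 1
    score += 1 if diabetes else 0
    score += 2 if stroke_or_tia else 0
    score += 1 if vascular_disease else 0
    score += 1 if female else 0
    return score
```

For example, a 76-year-old woman with hypertension scores 4 (2 for age, 1 for hypertension, 1 for female sex). Unlike an ML model, every point here is a fixed, transparent rule, which is precisely the property that makes such scores trusted but also limits their accuracy relative to models trained on large EHR datasets.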
Models must instead address everyday situations that clinicians, working in multidisciplinary teams, find problematic. In considering a UFH dosing algorithm, our team found that, with traditional weight-based dosing nomograms, only 30% of patients achieved the therapeutic range 36 h after infusion initiation. In the planning stages, pharmacists, internal medicine physicians, haematologists, clinical pharmacologists, academic researchers, and informatics experts came together to form a multisite, multidisciplinary expert group to co-design the research question, study protocol, data collection, and model development.

A key challenge with AI-derived algorithms is gaining the trust of end-users, who may perceive them as 'black boxes' with little insight into how predictions are generated.[2] For pharmacists, this lack of transparency and explainability is concerning, especially when research reveals AI-related medication errors. Pharmacists must engage in model co-design and evaluation, develop skills in information technology and computer science, participate in user testing of human-machine interfaces, and monitor AI performance for impacts on process efficiency and clinical outcomes. Training in digital health and AI competencies must be incorporated into pharmacy undergraduate curricula and postgraduate courses.[3] Pharmacy informaticians must collaborate with pedagogical experts within universities, the Australian Pharmacy Council, and the Australian Digital Health Agency to develop foundational AI courses.

Training datasets for AI/ML models must not be biased or unrepresentative; otherwise, models may perpetuate or accentuate existing healthcare disparities. Data biased according to race, gender, or socioeconomic status can generate inaccurate models that adversely impact marginalised or minority populations.
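The traditional weight-based nomogram that the algorithm aims to improve on can be sketched as a simple calculation. This is an illustrative sketch only: the 80 units/kg bolus and 18 units/kg/h starting infusion are the widely published values from the Raschke weight-based heparin nomogram, an assumption on our part, not the dosing rules used by this editorial's team:

```python
def ufh_initial_dose(weight_kg, bolus_per_kg=80, rate_per_kg_h=18):
    """Initial UFH dosing from a conventional weight-based nomogram.

    Returns (bolus_units, infusion_units_per_hour). Defaults follow the
    commonly published Raschke starting values; real protocols add aPTT-based
    rate adjustments and often cap or round doses.
    """
    bolus = round(weight_kg * bolus_per_kg)
    rate = round(weight_kg * rate_per_kg_h)
    return bolus, rate
```

For a 70 kg patient this gives a 5600-unit bolus and a 1260 units/h infusion. Because the calculation uses weight alone, ignoring renal function, concomitant medications, and other covariates an ML model can incorporate, it is unsurprising that only a minority of patients reach the therapeutic range promptly.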
All involved must recognise that most models are only truly generalisable to the patient cohorts from which their training data were derived, with model recalibration usually required when they are applied to new populations. We developed the UFH dosing algorithm using EHR data from one health district and validated it using EHR data from another. Bias can be further minimised if model development and validation follow, as ours did, best-practice guidelines and reporting standards (such as TRIPOD-AI[4] and PROBAST-AI[5]) and align with ethical and responsible-use frameworks for AI.

While developing models in silico using static training datasets is relatively straightforward, significant implementation challenges arise in the 'last mile' of translating models into dynamic, real-world environments. This is where models, whether integrated into bolt-on apps or embedded directly within EHRs, must operate and interact seamlessly with existing digital systems and clinical workflows. How such a model-turned-tool is then adopted and used by pharmacists and others depends on adequate end-user testing and training, and on optimisation of user interfaces, display formats, and ergonomic design. For our UFH dosing algorithm, the team is configuring it for use in an app and applying human-centred design principles to build a digital prototype for feasibility and acceptance testing.
We will also be guided by the recently published SALIENT framework, which outlines the what (key components), when (stages), and how (tasks) of successful AI implementation, as well as the who (organisation) and why (policy).[6] Using large datasets of sensitive patient information to train models poses risks of unauthorised access and breaches of confidentiality, requiring robust data governance and protection measures.[7] Approval to deploy the app in routine care will require clinical trials demonstrating its efficacy and safety and satisfying the software-as-a-medical-device provisions of the Therapeutic Goods Administration. To realise the benefits of AI/ML, a comprehensive and coordinated national AI-in-healthcare strategy needs to be adopted by multiple stakeholders (including industry, health services, and academia) across the states and territories.

The ultimate question is whether AI tools enable clinicians to work smarter and more efficiently, save healthcare costs, and render patient care more effective and safe. Machines do not tire and are not influenced by emotions, and they can learn and process vast amounts of information faster and more accurately than humans. But human oversight and judgement remain crucial in ensuring the appropriate design and use of algorithms and in monitoring their performance. Machines exist to augment, not usurp, clinician decision-making, empowering pharmacists to focus more on empathic patient interactions, education, and counselling, and fostering interprofessional healthcare delivery: integral components of care for which no machine can substitute. The future of hospital pharmacy is undeniably intertwined with the evolution of AI, and we should embrace and lead the agenda in using these technologies as supportive tools to enhance our clinical practice.

Acknowledgements: None.
Conflicts of interest: The authors declare that they have no conflicts of interest.
Author contributions: Conceptualisation: NF, IS, MB. Investigation: NF. Writing — original draft: NF, IS, MB. Writing — review and editing: NF, IS, MB.
Ethics: Ethical approval was not required for this editorial as it did not contain any human data or participants.
Peer review: Not commissioned, not externally peer reviewed.
Funding: This editorial received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Data availability: Data sharing is not applicable to this editorial as no datasets were generated or analysed.

Topics

Artificial Intelligence in Healthcare and Education · Quality and Safety in Healthcare · Artificial Intelligence in Healthcare