
This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The RANZCR Artificial Intelligence Committee: Position statement on autonomous AI

2024 · 0 citations · Journal of Medical Imaging and Radiation Oncology · Open Access

0 citations · 4 authors · published 2024

Abstract

The Royal Australian and New Zealand College of Radiologists (RANZCR) established a Faculty of Clinical Radiology (CR) Artificial Intelligence Working Group in 2018, which was converted into the Artificial Intelligence Advisory Committee (AIC) in 2019. In 2020, the AIC was expanded to include Radiation Oncology (RO) members, subsequently reporting to both the CR and RO Councils on the implications of AI and machine learning for those disciplines. The objectives of the AIC include advising on appropriate regulation, safety measures and standards for AI devices, monitoring AI-related activities nationally and internationally, and making recommendations on the respective Faculty training programmes, patient safety implications, the ethics of AI development and deployment, and workforce implications. The committee comprises at least 10 appointed members (at least three from RO), including a trainee and a consumer. It typically meets four times per year by teleconference and once face-to-face. Several AI consensus documents have been published on the RANZCR website (www.ranzcr.com): Standards of Practice, Ethical Principles, Regulation, COVID-19 and AI, and an international, multi-society paper on practical considerations of AI deployment in radiology.1 An increasingly available aspect of AI is autonomous AI, whereby computer algorithms function to a greater or lesser degree independently of human intervention. The risks to patients and to medical practice are self-evident. Accordingly, the AIC wishes to bring to the attention of the membership this College-approved Position Statement on autonomous AI. It has been subject to review by the College membership and multiple external stakeholder organisations.
“The potential benefits of using artificial intelligence (AI) in medicine are substantial, particularly to streamline routine tasks, minimise clinician workload, automate quality improvement, replace less precise computer algorithms that are already in practice, perform administrative tasks, and assist with teaching and research. This position statement focuses on autonomous AI. For broader AI-related topics, please see the College website for separate statements on Standards of Practice, Ethical Principles, and Generative AI. Autonomous AI systems are designed to act with limited human guidance, are capable of performing a high volume of tasks and can process data faster than humans. Consequently, even a small error rate could affect a large number of patients. It is therefore imperative to exercise caution and maintain oversight when using these systems in clinical practice to ensure that patient care is not compromised. The use of these tools should be carefully considered in full cognisance of the clinical context and potential patient risk. A core tenet of medicine, embedded in the Hippocratic oath, is “primum non nocere” (do no harm). Machines process information and draw conclusions differently from humans, and humans often cannot understand how machines generate their output. AI tools should not be permitted to operate autonomously if it cannot be ensured that they will function in line with established medical and ethical principles. To ensure that these autonomous AI systems are safe and effective when applied to patients, they must undergo rigorous testing prior to regulatory approval and use in clinical settings, and should be assessed on their clinical outcomes. Models designed for diagnosis, treatment, or risk mitigation require preclinical evaluation, including external validation with datasets representing target populations to ensure the models' applicability to the populations in which they will be deployed.
To ensure appropriate human evaluation of the output is possible, autonomous AI should be designed to initiate actions that are transparent, identifiable, and discoverable both at the time the output is produced and in retrospect, in case of an error. In the medical setting, there are gradations in the extent to which autonomous AI may function independently.1, 2 Along the path from referral to diagnosis and to treatment, human oversight or review should take place. The level of autonomy granted to a device will relate to the degree of complexity and risk level involved in the task being undertaken. The lowest-risk examples, where autonomous AI may require little oversight, might include image reconstruction, routine post-procedural check chest radiographs to exclude asymptomatic pneumothorax, or segmentation of the brain in palliative whole-brain radiation therapy for brain metastases. At the opposite extreme, reliance on autonomous AI alone for CT imaging in an acute abdomen, or brainstem segmentation for planning treatment of an infratentorial tumour, could have fatal consequences in the event of error. Implementation of autonomous AI tools will require interoperability with existing systems, ongoing robust monitoring to ensure that the tools remain safe and appropriate (particularly if the tool involves elements of machine learning), and processes in place to ensure they can be rapidly disabled in the event of failure.2 Sites should undertake a risk assessment prior to utilising AI tools to ensure that administrative and clinical workflows are not disrupted or placed in jeopardy. They should also consider the risk rating assigned to the AI tool by the regulator. While autonomous AI can improve efficiency and speed, and potentially the accuracy of treatments, this must be carefully weighed against the potential risks to patient outcomes.
Ultimately, decisions about care must be made collaboratively between the health care professional and the patient, taking into account the patient's presentation, history, treatment options and preferences.3”

The data that support the findings of this study are available from the corresponding author upon reasonable request.
