This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Artificial Intelligence in Biomedical Research and Publications: It is not about Good or Evil but about its Ethical Use
Citations: 5 · Authors: 3 · Year: 2024
Abstract
“From inability to let well alone, from too much zeal for the new and contempt for what is old, from putting knowledge before wisdom, science before art and cleverness before common sense, from treating patients as cases and from making the cure of the disease more grievous than the endurance of the same, good Lord deliver us.” Sir Robert Hutchison (1871 – 1960), The physician’s prayer. We are all at different stages of learning, excitement, and use of the rapidly developing field of artificial intelligence (AI). With increasingly refined algorithms and related applications, AI is progressively being integrated into contemporary clinical settings as well as research. The inspiration for this editorial was a flyer for an AI event that announced a session: “generate review of literature for thesis using latest AI technologies.” Should that raise a red flag, or is it one more skilling imperative? We place some of our thoughts here for public health students, researchers, practitioners, and policy makers to engage with this subject and debate the emerging dilemmas. We also need to be aware of the guidelines, regulations, and ethical dimensions of the use of AI in research and publication. We discuss the available national and journal guidelines and point out some gray areas as food for thought for the community medicine and public health fraternity in India.
Autonomous AI is a set of advanced systems and tools that can perform tasks ranging from basic repetitive activities to data analysis with limited human oversight and involvement.[1] Debates on how to make appropriate attributions of autonomy and agency to AI systems in relation to human goals revolve around multiple axes, including dimensions of ethics and the philosophy of action.[2] The Indian Council of Medical Research (ICMR) published its ethical guidelines for the application of AI in biomedical research and healthcare in 2023 to provide clarity on liability, transparency, accountability, and oversight.[3] While discussing AI’s potential to improve individual and population health, it cautions us about the ethical, legal, and social concerns arising from the complexities of algorithmic learning. The ten overarching principles are descriptive, not prescriptive. They are centered on the patient or research participant and, if followed, can guide stakeholders in using AI responsibly. Robust mechanisms that ensure adequate safeguards against stigmatization, discrimination, and the worsening of vulnerabilities, along with secure software designs that minimize risk and maintain the privacy and confidentiality of sensitive data, are the basis for the ethical use of AI in biomedical research. A key concern is trustworthiness, which can be ensured only by valid, reliable, and lawful technology that can be reasonably explained through scientific plausibility and transparency. The principle of data privacy includes appropriate security measures, anonymization, and the necessary ethical clearance and consent processes when individual data are used for AI-based algorithms. Other principles include accountability and liability, data quality (free from bias and representative of the target population), accessibility, equity, validity, and inclusiveness in the spirit of collaboration, fairness, and nondiscrimination.
These ten principles provide a framework for the entire spectrum of stakeholders: researchers, innovators, product developers, patients, technologists, healthcare professionals, ethicists, and funding agencies. Although these principles help in understanding the ethical use of AI in the domain of biomedical science, an emergent aspect, the use of AI in publication practices, urgently needs to be recognized by the scientific community. These issues took center stage when some scientific publications listed ChatGPT as a co-author.[4] The use of AI is not limited to authors; it extends to potential use by reviewers and editors. The World Association of Medical Editors (WAME),[5] the Committee on Publication Ethics (COPE),[6] and the International Committee of Medical Journal Editors (ICMJE)[7] issued statements that AI tools cannot be listed as authors. This is because they do not meet authorship criteria; cannot declare conflicts of interest; cannot be held responsible for the accuracy, integrity, and originality of the work; and, of course, lack the agency to sign or consent to copyright and license agreements. Authors should be transparent, disclosing in the cover letter any use of AI in the production of the submitted work, describing its methodology, and acknowledging it as appropriate. The final responsibility for material generated by AI tools rests with the authors and merits full disclosure. The WAME and ICMJE guidelines state that editors and editorial staff must be aware that using AI technology to process manuscripts may violate confidentiality. Like authors, they must also disclose the use of such technology in the evaluation and generation of reviews and correspondence, since AI technologies lack critical thinking and original assessment and can therefore generate incorrect, incomplete, and biased conclusions about manuscripts.
Biases may creep into AI algorithms through training data that includes inherently biased human decision making or simply reflects historical or social inequities due to flawed data sampling.[8] Uploading a submitted document into a generative AI tool poses a real danger of breaching data privacy rights wherever personally identifiable information is present. More and more publishers now have clearly stated AI policies on their websites. However, as WAME recommends, editors also need appropriate tools to help them detect content that has been generated or altered by AI. This is a significant barrier for regional and society journals that lack adequate resources to access such technologies, and it leads us to a larger and ever-widening chasm of “haves and have nots” in the publishing world. The earlier ICMJE guidelines recommended disclosure by reviewers if any AI technology was used to facilitate the review. The revised guidelines of 2024 require reviewers to obtain prior permission from the journal to use this technology. Some journals consider unpublished manuscripts confidential documents, and any use of AI may violate authors’ confidentiality and proprietary rights or cause data and privacy breaches wherever personal identifiers are present. The use of AI to improve the language and readability of the peer review report remains a gray area. At the time of writing this editorial, two of the 10 large publishers (Elsevier and Taylor & Francis) clarify the use of AI tools by reviewers.[9] The former focuses on data security, confidentiality, and privacy and does not prohibit the use of generative AI to produce the content of the peer review report, while the latter is categorical that these tools are not to be used in manuscript review reports. Significantly, guidelines on handling any violations seem to be missing.
Based on the current review of the ICMJE, COPE, WAME, and ICMR guidelines, we call for wider discussion and engagement of Indian academia in this evolving discourse. We need to be mindful that unethical and unchecked use of AI can jeopardize our academic rigor and creative capabilities. As a beginning, academic institutions need to incorporate this important component into their research methodology workshops, institutional publishing policies, and research output (including theses/dissertations) by adhering to the available guidelines. Members of institutional review boards and ethics committees need to be updated too. AI in 2024 is considered to be at the stage of evolution that the internet was at in the 1990s: a technology that is all set to become integral to our academic lives.[10] The bottom line: treat tools as tools, with the primacy of the “human in the loop” approach[11] to limit reliance on generated content (disclosing wherever needed), while not being caught unawares through ignorance of the technology. Postscript: The preparation of this manuscript overlapped with the 2024 Paris Olympics. Yusuf Dikec, the sensational Turkish silver medalist in shooting, posed the question of whether “future robots can win medals at the Olympics.”[12] Time will tell.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations