This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Enhanced guidance on artificial intelligence for medical publication and communication professionals
Citations: 1
Authors: 10
Year: 2025
Abstract
The International Society for Medical Publication Professionals (ISMPP) position statement and call to action on the use of artificial intelligence (AI), published in 2024, recognized the value of AI while advocating for best practices to guide its use. In this commentary, we offer enhanced guidance on the call to action for ISMPP members and other medical communication professionals on the topics of education and training, implementation and use, and advocacy and community engagement. With AI rapidly revolutionizing scientific communication, members should stay up to date with advancements in the field by completing AI training courses, engaging with ISMPP AI education and training and other external training platforms, developing a practice of lifelong learning, and improving AI literacy. Members can successfully integrate and use AI by complying with organizational policies, ensuring fair access to AI models, complying with authorship guidance, properly disclosing the use of AI models or tools, respecting academic integrity and copyright restrictions, and understanding privacy protections. Members also need to be familiar with the systemic problem of bias in large language models, which can reinforce health inequities, as well as the limits of transparency and explainability of AI models, which can undermine source verification, bias detection, and even scientific integrity. AI models can produce hallucinations (outputs that are factually incorrect, irrelevant, or nonsensical), which is why all outputs from AI models should be reviewed and verified for accuracy by humans. With respect to advocacy and community engagement, members should advocate for the responsible use of AI, participate in developing AI policy and governance, work with underserved communities to provide access to AI tools, and share findings from AI use cases or research results in peer-reviewed journals, at conferences, and on other professional platforms.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations
Authors
Institutions
- AbbVie (United States) (US)
- Office of the Chief Scientist (IL)
- Prime Minister's Office (BN)
- Public Risk Management Association (US)
- General Department of Preventive Medicine (VN)
- Salus (United States) (US)
- Pfizer (United States) (US)
- Inflammation Research Foundation (US)
- Madrigal Pharmaceuticals (United States) (US)
- Real Prevention (United States) (US)
- Envision Education (US)
- Central Intelligence Agency (US)