This is an overview page with metadata for this scientific article. The full article is available from the publisher.
On hammer, nails, and researchers
0
Citations
1
Authors
2024
Year
Abstract
In the last weeks of 2022 (on 30 November 2022, to be precise), the large generative artificial intelligence (AI) tool ChatGPT was launched and took the world by storm. It immediately drew the attention of the entire biomedical publication paradigm, owing to the technology's unique capacity to generate texts from specific inputs.[1] As with any new technology, most curious biomedical researchers experimented with the new techniques. There was exponential curiosity to play with all possible combinations and "prompts" that the technology offered.[2] Biomedical researchers were able to generate texts without personally processing published information, developing complex ideas, or formulating theories, and a few even attempted to generate data. All of this happened in a matter of a few hours, even when the researchers lacked a deep understanding, clarity, and mastery of the subject. With the help of AI, even researchers with superficial, random ideas were able to simplify complex ideas for the reader. In this process there was a loss of academic integrity as well as inadvertent or intentional misconduct. One section of researchers highlighted and celebrated the usefulness of AI, while others had reservations.[3‐5] There were analyses that critically reviewed the strengths, weaknesses, opportunities, and threats that such a novel, versatile, and free technology offered.[6] Sensing the threat that AI poses to scientific communication, journals began to impose restrictions and guidelines,[7] largely to avoid the risk of abuse of the rising AI technology.[8] In addition, AI may cause a critical Dunning–Kruger effect that largely hurts scientific progress.[9,10] Collectively, the overuse of or overt reliance on new technologies because of their availability or perceived benefits, commonly known as the "law of the instrument," can lead to unnecessary costs, pollute research, and may ultimately result in suboptimal patient care.
Responsible use of generative AI can play a crucial role in preventing such overutilization and in guiding biomedical research toward more effective and efficient outcomes. Biomedical researchers should use AI ethically (after declaring its use) to rapidly analyze vast amounts of medical literature, identifying trends, conflicting findings, and emerging evidence. This may help researchers make informed decisions about the appropriateness of new technologies. By their inherent nature, AI models can perpetuate biases present in the data on which they are trained. Authors should therefore be transparent with the community and allow fellow researchers and clinicians to understand how they arrived at their conclusions using AI. Last but not least, human oversight is essential to ensure that AI is used responsibly and ethically in the biomedical publication paradigm. When a hammer is new, it is human nature to look for chances to hammer nails. But in the case of generative AI tools in publication and research, the hammer should be wielded very cautiously and transparently.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations