This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Artificial intelligence in medical publishing: Peer or pretender?
1
Citations
1
Authors
2023
Year
Abstract
There is nothing to write about what is being written about.[1] Technology is rapidly leaving itself behind; anything written about it today is already yesterday on the tech calendar. The curiosity and concern about generative artificial intelligence (AI) in scientific publications seem more like a passing phase of socially appropriate behavior among science writers and publishers. What is being written concerns current technology and its possible influence on scientific publishing in the near future. Publishing medical literature is another matter altogether: it certainly tells nothing about the present and raises uncertainty about the future! All medical literature, as published now, is always about the past. Indeed, the best of evidence-based medicine is always about the past, be it a systematic review or a meta-analysis.[2] Longitudinal studies and large datasets, both accrued over long periods in the past, are considered most valuable, and are statements about the past; in the present, only a conclusion is drawn from those observations. That, however, evidently does not seem to be the widespread worry. Concerns are being raised about how new technology will be misused in publishing, and medical publishing is today's discussion.[3] To understand that, let us first see what this new technology is and what is currently published. First, what is currently published? A study starts with an objective, looks at observations, analyzes them for coherence (confirming old conclusions) or variance (raising newer possibilities), and ends with a conclusion that either brings closure to the objective or leaves it open-ended. To be worthy of publication, ideally, a study, a hypothesis, or a commentary must offer some novel data, a newer observation, or a different analytical view. What is actually published falls short of that ideal in large measure, but let us overlook that. Now, let us see where current AI figures in the process.
I am tempted to believe that AI-2023, which has appeared in various avatars called ChatGPT, BARD, etc., has landed in a kind of catch-22 insofar as medical publishing is concerned: it knows the content without knowing the context! Stated more simply, the objectives of a study are neither determined by AI, nor is the clinical evidence generated by it. Its strength lies in perusing the relevant published literature more efficiently than a manual search, and it is adept at using software for statistical methods, like the Statistical Package for the Social Sciences (SPSS; IBM SPSS Statistics for Windows, IBM Corp., NY). Since authors must provide information about the statistical methods used anyway, they can simply state, and thereby lend strength to their study, that such and such AI algorithms were used. As far as conclusions are concerned, we know why IBM Watson remained Watson, the second fiddle, and could not replace Holmes in deductive decision-making: "Failure to secure the cooperation of key stakeholders, notably doctors who were asked to improve the performance of AI but were undermined by claims that AI could outperform them."[4] Unlike human authors and their peers, who are susceptible to vanity regarding the correctness and value of their contributions, AI is more objective in acknowledging its shortcomings. There are, however, a couple of areas where, unsurprisingly, it beats humans. Let us look at them and then decide whether that is a bane or a boon. The current generation of generative AI can tell how the past might replicate itself in the future without a present intervening. In other words, studies can be planned, and results, for instance serious adverse events, predicted without conducting the study at all, using past data; such analysis could help abort clinical trials fated to fail, or help design them better for faster results.[5] That should be a boon, not a bane, for medical research. Data are the real evidence.
A single piece of good evidence is as good as, if not better than, plenty of weak evidence. What is currently published is not necessarily built on good data, even if real, simply because it lacks completeness. Moreover, more data are even better: AI can sift data better, crunch massive datasets, and thereby give more precise results. In short, AI could help design and conduct meaningful research, which translates into better-quality medical literature. The second issue is fake authorship, which is different from plagiarism. AI can be made to write entirely fictional studies. We know, however, that real studies have been published on fake data.[6] Hence, the real concern here is how the artificial will recognize the fake! But that is an issue beyond publishing; it is about manipulation and morality. In times when the ability to decipher a computer-generated CAPTCHA determines the real identity of the user, the reality of authorship could be determined by ticking (☑) "The Authors Are Human," with small print that says "ChatGPT/etc. was used to help in searching medical literature and preparing the manuscript." Publishers have clear guidelines on this.[7] In addition, AI can provide authors a preliminary review before submission, a kind of "pre-peer-review screening,"[8] to enhance the quality of the study and boost the odds of acceptance. To publish anything that has clinical utility and demographic relevance, and that is suitable for a change in practice, it should bring out something different, not the same thing differently. Unlike the basic code of current AI, which is pattern recognition, good medical literature of value is about breaking the pattern. While AI is a disruptive tool in reproduction, it is not yet creative enough to be intuitive and hypothesis-generating: it starts with a search and ends with a summary. It knows where to search, and likely what is therein, but not why. That is where peer review of the real kind stays where it is, or where it should be.
AI is going to make both writing and reviewing easier, faster, and better, but not peerless, not yet.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,312 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,169 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,564 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,466 citations