This is an overview page with metadata about this scientific publication. The full article is available from the publisher.
Improving Detection of ChatGPT-Generated Fake BioMedical Science Using Real Publication Text: Introducing xFakeBibs a Supervised-Learning Network Algorithm (Preprint)
Citations: 1
Authors: 2
Year: 2023
Abstract
<sec> <title>BACKGROUND</title> ChatGPT is becoming a new reality. Where do we go from here? </sec> <sec> <title>OBJECTIVE</title> The objective is to show how ChatGPT-generated publications can be distinguished from counterparts produced by biomedical scientists. </sec> <sec> <title>METHODS</title> Using a new algorithm, called xFakeBibs, we show the significant difference between ChatGPT-generated fake publications and real publications. Specifically, we prompted ChatGPT to generate 100 publications related to Alzheimer's disease and comorbidity. Using the TF-IDF measure against a dataset of real publications, we constructed a network training model from the bigrams extracted from 100 publications. From 10 folds of 100 publications each, we built 10 calibration networks to derive lower and upper bounds for classifying an article as real or fake. The final step of the algorithm tests xFakeBibs against each of the ChatGPT-generated articles and predicts its class. The xFakeBibs algorithm assigns the label POSITIVE to real publications and NEGATIVE to fake ones. </sec> <sec> <title>RESULTS</title> When comparing the training model with the calibration models, we found that the similarities fluctuated between 19% and 21% of bigram overlaps. The calibration folds contributed 51%-70% of new bigrams, while ChatGPT contributed only 23% (more than 50% lower than any of the 10 calibration folds). When classifying the individual articles, the xFakeBibs algorithm predicted 98/100 publications as fake, while 2 articles failed the test and were classified as real publications. </sec> <sec> <title>CONCLUSIONS</title> This work provides clear evidence of how to distinguish ChatGPT-generated articles from real ones. The analysis demonstrated that such content is distinguishable in bulk, and the algorithmic approach detected individual fake articles with a high degree of accuracy. However, it remains challenging to detect all fake records.
ChatGPT may seem to be a useful tool, but it presents a real threat to authentic knowledge and genuine science. This work is a step in the right direction toward countering fake science and misinformation. </sec> <sec> <title>CLINICALTRIAL</title> N/A </sec>
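The core idea in the abstract's methods, classifying an article by how strongly its bigrams overlap a training network built from real publications, can be sketched roughly as follows. This is a minimal illustration only: the function names, whitespace tokenization, and the single lower-bound decision rule are assumptions for the sketch, not the authors' actual xFakeBibs implementation (which also uses TF-IDF filtering and upper bounds derived from 10 calibration folds).

```python
def extract_bigrams(text):
    # Naive tokenization: lowercase, split on whitespace, pair
    # consecutive tokens (the paper's preprocessing may differ).
    tokens = text.lower().split()
    return set(zip(tokens, tokens[1:]))

def overlap_ratio(article_bigrams, training_bigrams):
    # Fraction of the article's bigrams that also appear in the
    # training network built from real publications.
    if not article_bigrams:
        return 0.0
    return len(article_bigrams & training_bigrams) / len(article_bigrams)

def classify(article_text, training_bigrams, lower_bound):
    # Hypothetical decision rule: articles whose overlap falls below
    # the calibration-derived lower bound are flagged as fake.
    ratio = overlap_ratio(extract_bigrams(article_text), training_bigrams)
    return "real" if ratio >= lower_bound else "fake"
```

In this sketch, `lower_bound` stands in for the threshold the paper derives from its 10 calibration networks (the reported 19%-21% overlap band suggests the bound sits in that region, but the exact rule is the authors'; the value here is illustrative).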
Related works
The spread of true and false news online
2018 · 8,014 citations
What is Twitter, a social network or a news media?
2010 · 6,638 citations
Social Media and Fake News in the 2016 Election
2017 · 6,402 citations
Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children's understanding of deception
1983 · 6,253 citations
The Matthew Effect in Science
1968 · 6,133 citations