This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
AITurk: Using ChatGPT for Social Science Research
Citations: 3
Authors: 3
Year: 2024
Abstract
Artificial intelligence, especially large language models (LLMs), has been widely used for scientific research. Yet, few studies have explored their potential to advance social science research. This research evaluates how effectively ChatGPT can mimic responses from real human participants on online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk), Prolific, and CloudResearch. We replicated 22 studies published in top psychology journals between January 2023 and June 2023. Since ChatGPT 4.0’s cutoff date is September 2021, its training database does not include articles published after that time. The current research is among the first to use ChatGPT to replicate social science studies whose conclusions have not been included in ChatGPT’s training database. This unique methodology strengthens the credibility of our findings and establishes a more robust foundation for applying AI to simulate human behavior. The results show that ChatGPT successfully replicates about 93.2% (20.5/22) of the findings from these studies. While conducting these studies (assuming each study is a typical 5-minute online experiment with 300 participants) on online crowdsourcing platforms could take approximately 11 days and cost around $3,960, using AI through a platform we term “AITurk” could reduce the time to about 11 minutes and the cost to $132. That is, AITurk matches real human participants’ responses on online crowdsourcing platforms with about 93.2% accuracy, at about 1/1440 of the time and 1/30 of the cost. Based on these findings, we suggest that ChatGPT can be an effective tool for social science research, especially for conducting preliminary research and evaluating the replicability of existing studies.
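The abstract’s cost and time figures can be checked with simple arithmetic. A minimal sketch follows; the per-response pay rate of $0.60 (i.e., $7.20/hour for a 5-minute task) is an assumption not stated in the abstract, chosen because it reproduces the quoted $3,960 total for 22 studies of 300 participants each:

```python
# Sketch of the abstract's cost/time arithmetic.
# ASSUMPTION: $0.60 per 5-minute response (= $7.20/hr); the abstract
# does not state the pay rate, so this value is illustrative only.

N_STUDIES = 22
PARTICIPANTS = 300          # per study, per the abstract
COST_PER_RESPONSE = 0.60    # assumed pay for one 5-minute task

human_cost = N_STUDIES * PARTICIPANTS * COST_PER_RESPONSE
print(human_cost)           # 3960.0, matching the abstract's ~$3,960

# The abstract's stated reductions:
human_minutes = 11 * 24 * 60        # ~11 days of crowdsourced data collection
ai_minutes = human_minutes / 1440   # 1/1440 of the time -> 11.0 minutes
ai_cost = human_cost / 30           # 1/30 of the cost  -> $132.0
print(ai_minutes, ai_cost)
```

Under this assumed rate, the total cost, the 1/1440 time reduction (11 days to 11 minutes), and the 1/30 cost reduction ($3,960 to $132) are all mutually consistent.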