This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
AITurk: Using ChatGPT for Social Science Research
Citations: 4
Authors: 4
Year: 2025
Abstract
Artificial intelligence, especially large language models (LLMs), has been widely used for scientific research. Yet few studies have explored its potential to advance social science research. This research evaluates how effectively ChatGPT can mimic responses from real human participants on online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk), Prolific, and CloudResearch. We replicated 22 studies published in top psychology journals between January 2023 and June 2023. Since ChatGPT 4.0’s training cutoff date is September 2021, its training data do not include articles published after that time. The current research is among the first to use ChatGPT to replicate social science studies whose conclusions are not included in ChatGPT’s training data. This methodology strengthens the credibility of our findings and establishes a more robust foundation for applying AI to simulate human behavior. The results show that ChatGPT successfully replicates about 93.2% (20.5/22) of the findings from these studies. While conducting these studies (assuming each is a typical 5-minute online experiment with 300 participants) on online crowdsourcing platforms could take approximately 11 days and cost around $3,960, using AI through a platform we term “AITurk” could reduce the time to about 11 minutes and the cost to $132. That is, AITurk reproduces about 93.2% of the results obtained from real human participants on online crowdsourcing platforms, at about 1/1440 of the time and 1/30 of the cost. Based on these findings, we suggest that ChatGPT can be an effective tool for social science research, especially for conducting preliminary research and evaluating the replicability of existing studies.
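The abstract's time and cost ratios follow directly from the figures it reports. A minimal sketch of the arithmetic (all dollar and day figures are taken from the abstract; nothing else is assumed):

```python
# Sanity-check the abstract's claimed ratios: ~1/1440 the time, ~1/30 the cost.
human_days = 11      # total time for 22 studies on crowdsourcing platforms
human_cost = 3960    # total cost in USD on crowdsourcing platforms
ai_minutes = 11      # total time via "AITurk"
ai_cost = 132        # total cost in USD via "AITurk"

human_minutes = human_days * 24 * 60    # 11 days = 15,840 minutes
time_ratio = human_minutes / ai_minutes # 15,840 / 11 = 1,440
cost_ratio = human_cost / ai_cost       # 3,960 / 132 = 30

print(time_ratio, cost_ratio)  # 1440.0 30.0
```

Both ratios check out exactly against the abstract's "1/1440 time and 1/30 cost" claim.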
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 cit.