This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Trained AI as “Experiment Participants”: theory and practice
Citations: 1
Authors: 3
Year: 2025
Abstract
This study introduces an innovative methodology that employs Artificial Intelligence (AI) to replace the functional roles of experimental participants, addressing key challenges in behavioral economics and economic psychology. Traditional experiments often face logistical difficulties, such as the high cost of recruiting large-scale participants and the inability to model idealized experimental conditions. By leveraging AI, we create functional baselines that simulate participant groups under specific conditions, such as bias-free or rational benchmarks, making otherwise challenging experiments feasible. To demonstrate this approach, we conducted a sample study investigating gender-based confirmation bias in academic publishing. We trained AI models on text-based features extracted from thousands of economics papers while intentionally excluding explicit gender information. The AI simulated a "bias-free" benchmark, predicting journal rankings solely based on paper content. By comparing these predictions to real-world outcomes, we identified significant discrepancies: female-authored papers underperformed relative to AI predictions, while male-authored papers performed better. Our findings reveal the persistence of gender-based confirmation bias in peer review, despite controlling for content quality. By simulating the functional role of participants using simple text-based models, we demonstrate how computational tools can serve as rational, bias-free benchmarks in experimental design. Although the current model uses a basic bag-of-words approach, it illustrates a broader methodological vision: AI-driven simulations that can eventually emulate human reasoning with greater complexity and modality.
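The abstract's core idea, a bag-of-words model that predicts journal placement from paper text alone, so that its content-only predictions can serve as a "bias-free" benchmark, can be sketched in a few lines. The toy corpus, tier labels, and scoring rule below are illustrative assumptions, not the authors' actual data or model:

```python
from collections import Counter

# Toy training data: (paper text, journal tier) pairs.
# Author gender is deliberately absent; only content features are used.
train = [
    ("treatment effect identification strategy", "top"),
    ("novel identification of treatment effects", "top"),
    ("survey descriptive statistics overview", "mid"),
    ("descriptive survey of regional statistics", "mid"),
]

# Per-tier bag-of-words counts (a naive frequency-overlap scorer).
tier_counts = {}
for text, tier in train:
    tier_counts.setdefault(tier, Counter()).update(text.split())

def predict_tier(text):
    """Predict a journal tier from text content alone."""
    scores = {tier: sum(counts[w] for w in text.split())
              for tier, counts in tier_counts.items()}
    return max(scores, key=scores.get)

# The content-only prediction would then be compared against the
# real-world outcome; a systematic gap for one author group would
# indicate bias in the review process rather than in paper content.
print(predict_tier("identification of a treatment effect"))  # → top
```

In the study itself the same comparison is run over thousands of papers, and the discrepancy between AI-predicted and actual rankings is examined separately for female- and male-authored work.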
Related Works
UCSF Chimera—A visualization system for exploratory research and analysis
2004 · 47,072 citations
SciPy 1.0: fundamental algorithms for scientific computing in Python
2020 · 35,796 citations
Clustal W and Clustal X version 2.0
2007 · 28,887 citations
The REDCap consortium: Building an international community of software platform partners
2019 · 22,780 citations
Array programming with NumPy
2020 · 20,778 citations