OpenAlex · Updated hourly · Last updated: 17.03.2026, 01:14

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Snake Oil or Panacea? How to Misuse AI in Scientific Inquiries of the Human Mind

2026 · 0 citations · Behavioral Sciences · Open Access
Open full text at publisher

Citations: 0 · Authors: 2 · Year: 2026

Abstract

Large language models (LLMs) are increasingly used to predict human behavior from plain-text descriptions of experimental tasks that range from judging disease severity to consequential medical decisions. While these methods promise quick insights without complex psychological theories, we reveal a critical flaw: they often latch onto accidental patterns in the data that seem predictive but collapse when faced with novel experimental conditions. Testing across multiple behavioral studies, we show these models can generate wildly inaccurate predictions, sometimes even reversing true relationships, when applied beyond their training context. Standard validation techniques miss this flaw, creating false confidence in their reliability. We introduce a simple diagnostic tool to spot these failures and urge researchers to prioritize theoretical grounding over statistical convenience. Without this, LLM-driven behavioral predictions risk being scientifically meaningless, despite impressive initial results.
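The failure mode the abstract describes can be illustrated with a toy sketch (entirely hypothetical; this is not the paper's actual diagnostic, data, or model): a predictor fit within one experimental condition picks up a relationship that holds there, passes an in-distribution check, and then reverses sign in a novel condition.

```python
import random

# Hypothetical illustration: within condition "A" a feature x tracks the
# outcome y, but in a new condition "B" the true relationship is reversed.
random.seed(0)

def simulate(condition, n=200):
    """Generate (x, y) pairs; the x-y relationship flips sign in condition B."""
    sign = 1.0 if condition == "A" else -1.0
    return [(x, sign * x + random.gauss(0, 0.1))
            for x in (random.gauss(0, 1) for _ in range(n))]

def fit_slope(data):
    # Ordinary least squares through the origin: slope = sum(x*y) / sum(x*x)
    sxy = sum(x * y for x, y in data)
    sxx = sum(x * x for x, _ in data)
    return sxy / sxx

def agreement(data, slope):
    # Positive when predictions and outcomes point the same way on average
    return sum(slope * x * y for x, y in data)

train = simulate("A")
slope = fit_slope(train)          # learned from condition A only

held_in = simulate("A")           # "standard validation": same condition
held_out = simulate("B")          # novel condition the model never saw

print(slope > 0)                      # model learned a positive slope
print(agreement(held_in, slope) > 0)  # in-distribution check looks fine
print(agreement(held_out, slope) > 0) # relationship reverses out of context
```

A random train/test split drawn from condition A alone would never expose the reversal; only holding out an entire condition does, which is the kind of check the authors argue must replace statistical convenience.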

Topics

Artificial Intelligence in Healthcare and Education · Mental Health via Writing · Explainable Artificial Intelligence (XAI)