This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Personal Validation Effect in LLMs: Positive AI Responses Bias Perceptions of Validity, Reliability, Personalization, and Usefulness of Fictitious Predictions
1 citation · 4 authors · 2026
Abstract
Large Language Models (LLMs) are becoming increasingly ubiquitous in daily life, impacting decision-making across various domains. A substantial body of prior work has shown that individuals tend to evaluate positive predictions more favorably than negative ones—a phenomenon often referred to as the personal validation effect—across various non-AI prediction sources. Building on this foundation, we extend this well-established psychological effect to the context of LLM-based predictions, examining how prediction valence influences users’ perceptions when the source is an AI system. We investigate how positive AI-generated responses affect perceived validity, personalization, reliability, and usefulness of chatbot predictions, even when those predictions are fictitious and pre-scripted. In a study of 238 participants, positive predictions were perceived as significantly more valid (36% increase), personalized (42% increase), reliable (27% increase), and useful (22% increase) than negative predictions. These findings demonstrate that the personal validation effect persists in interactions with LLMs and underscore the substantial role of prediction valence in shaping user perceptions, with important implications for the design and deployment of AI systems across diverse applications.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,551 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,443 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,942 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,792 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations