OpenAlex · Updated hourly · Last updated: May 1, 2026, 10:42

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

Personal Validation Effect in LLMs: Positive AI Responses Bias Perceptions of Validity, Reliability, Personalization, and Usefulness of Fictitious Predictions

2026 · 1 citation · Open Access
Open full text at the publisher

Citations: 1 · Authors: 4 · Year: 2026

Abstract

Large Language Models (LLMs) are becoming increasingly ubiquitous in daily life, impacting decision-making across various domains. A substantial body of prior work has shown that individuals tend to evaluate positive predictions more favorably than negative ones—a phenomenon often referred to as the personal validation effect—across various non-AI prediction sources. Building on this foundation, we extend this well-established psychological effect to the context of LLM-based predictions, examining how prediction valence influences users’ perceptions when the source is an AI system. We investigate how positive AI-generated responses affect perceived validity, personalization, reliability, and usefulness of chatbot predictions, even when those predictions are fictitious and pre-scripted. In a study of 238 participants, positive predictions were perceived as significantly more valid (36% increase), personalized (42% increase), reliable (27% increase), and useful (22% increase) than negative predictions. These findings demonstrate that the personal validation effect persists in interactions with LLMs and underscore the substantial role of prediction valence in shaping user perceptions, with important implications for the design and deployment of AI systems across diverse applications.

Topics

Artificial Intelligence in Healthcare and Education · AI in Service Interactions · Ethics and Social Impacts of AI