OpenAlex · Updated hourly · Last updated: 31.03.2026, 03:43

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Emotional Framing in Prompts Modulates Large Language Model Performance

2026 · 0 citations · Big Data and Cognitive Computing · Open Access

0 citations · 2 authors · Year: 2026

Abstract

Large Language Models (LLMs) demonstrate remarkable performance across a variety of natural language understanding tasks, yet their sensitivity to emotional framing in user prompts remains underexplored. This paper presents an empirical study investigating how four emotional tones—joy, apathy, anger, and fear—affect LLM performance on the SuperGLUE benchmark. We evaluate five instruction-tuned, open-weight models across eight diverse tasks, systematically modulating input prompts with affective cues while keeping semantic content constant. Results reveal that prompts framed with joy and apathy lead to consistently higher accuracy, with gains of up to 4.5 percentage points compared to fear-framed inputs, which yield the lowest performance. These findings demonstrate that affective modulation in user prompts measurably impacts LLM reasoning and task outcomes, suggesting that emotional framing is not merely stylistic but functionally relevant to model behavior. Our study provides a reproducible experimental framework and an open-source prompt set, offering a foundation for future research on affect-aware prompting strategies and their implications in human–AI interaction.
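The manipulation described above, prepending an affective cue while holding the task's semantic content constant, can be sketched as follows. This is a minimal illustration, not the authors' released prompt set; the framing sentences and the `frame_prompt` helper are assumptions made for demonstration.

```python
# Illustrative affective prompt modulation: the task text stays fixed while
# an emotional framing sentence is prepended. These framing sentences are
# hypothetical examples, not the study's actual prompts.
AFFECTIVE_FRAMES = {
    "joy": "I'm delighted to work on this with you!",
    "apathy": "Whatever, just answer the question.",
    "anger": "This is infuriating, get it right this time.",
    "fear": "I'm terrified of getting this wrong, please be careful.",
    "neutral": "",  # control condition with no affective cue
}

def frame_prompt(task_text: str, emotion: str) -> str:
    """Prepend an affective cue to a task prompt, keeping its semantics constant."""
    frame = AFFECTIVE_FRAMES[emotion]
    return f"{frame} {task_text}".strip() if frame else task_text

# Example: one BoolQ-style SuperGLUE item under two framings
task = "Passage: ... Question: Is the statement true? Answer yes or no."
print(frame_prompt(task, "joy"))
print(frame_prompt(task, "neutral"))
```

Because only the prefix varies, any accuracy difference between conditions can be attributed to the affective framing rather than to changes in task content.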

Topics

Artificial Intelligence in Healthcare and Education · Topic Modeling · Explainable Artificial Intelligence (XAI)