OpenAlex · Updated hourly · Last updated: Apr 20, 2026, 20:42

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

REFLECT: Tutorial on Reflecting on Bias in LLMs through Human-Centered Perspectives

2026 · 1 citation · Open Access
Open full text at publisher

Citations: 1

Authors: 3

Year: 2026

Abstract

Large Language Models (LLMs) increasingly shape how people access, produce, and reason with information. These models do not merely generate language; they mirror the data, discourse, and cognitive patterns on which they are trained. As a result, they often reflect and amplify existing social and cognitive biases, and even stereotypical representations, influencing what information is surfaced, how perspectives are represented, and which voices are privileged or silenced. Understanding these reflections requires going beyond technical bias detection toward examining how bias emerges in LLM outputs, how users perceive and react to it, and how design choices can reinforce or mitigate biased interactions. REFLECT offers a human-centered exploration of bias in LLMs, connecting perspectives from computer science, human–computer interaction, and cognitive psychology. REFLECT examines multiple ways in which bias manifests in LLMs (from selection effects to acquiescence and stereotypical associations) and discusses what these manifestations reveal about the interaction between human data, model training, and generative processes. It also explores interaction and design strategies that can help make such reflections visible and open to critical interpretation. Designed as a half-day session, REFLECT provides a concise yet reflective introduction to understanding bias as an emergent property of LLMs. By the end, participants will be equipped with conceptual and practical tools to identify and interpret how LLMs reflect biases, fostering more transparent, accountable, and trustworthy human–AI interactions.

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI · Explainable Artificial Intelligence (XAI)