This is an overview page with metadata for this scientific work. The full article is available from the publisher.
REFLECT: Tutorial on Reflecting on Bias in LLMs through Human-Centered Perspectives
Citations: 1
Authors: 3
Year: 2026
Abstract
Large Language Models (LLMs) increasingly shape how people access, produce, and reason with information. These models do not merely generate language; they mirror the data, discourse, and cognitive patterns on which they are trained. As a result, they often reflect and amplify existing social and cognitive biases, and even stereotypical representations, influencing what information is surfaced, how perspectives are represented, and which voices are privileged or silenced. Understanding these reflections requires going beyond technical bias detection toward examining how bias emerges in LLM outputs, how users perceive and react to it, and how design choices can reinforce or mitigate biased interactions. REFLECT offers a human-centered exploration of bias in LLMs, connecting perspectives from computer science, human–computer interaction, and cognitive psychology. It examines multiple ways in which bias manifests in LLMs (from selection effects to acquiescence and stereotypical associations), discusses what these manifestations reveal about the interplay between human data, model training, and generative processes, and explores interaction and design strategies that can make such reflections visible and open to critical interpretation. Designed as a half-day session, REFLECT provides a concise yet reflective introduction to understanding bias as an emergent property of LLMs. By the end, participants will be equipped with conceptual and practical tools to identify and interpret how LLMs reflect biases, fostering more transparent, accountable, and trustworthy human–AI interactions.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,490 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,376 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,832 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,553 citations