This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
“It Hasn’t Lived in Our Society”: Investigating Cultural Sensitivity in LLM Chatbots for Emotional Support
Citations: 1
Authors: 5
Year: 2026
Abstract
Large Language Models (LLMs) offer potential benefits for increasing access to digital well-being support, yet their application raises important questions about risks and responsible implementation. This paper examines a critical, often overlooked, dimension of LLM safety: cultural and social alignment in underrepresented contexts. We investigate how LLM-mediated emotional support can be adapted for a specific cultural setting, using Saudi Arabia as a case study. We present CSESC, a Culturally Sensitive Emotional Support Chatbot, developed as a technology probe to explore user perceptions of culturally sensitive responses. Our adaptation process was grounded in emotional support frameworks and guided by multicultural guidelines and local expertise. User evaluations demonstrate that cultural alignment enhances users’ sense of relatedness, while also surfacing tensions between empathy and sociocultural norms. We discuss the notion of “minimum cultural alignment,” contributing to HCI literature on culturally responsive LLM design and broadening the understanding of LLM safety.
Related Works
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller
1999 · 5,633 citations
An experiment in linguistic synthesis with a fuzzy logic controller
1975 · 5,583 citations
A Framework for Representing Knowledge
1988 · 4,551 citations
Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
2023 · 3,431 citations