This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Developing Critical AI Language Literacy—prompting experiments on raciolinguistic bias to understand large language models as cultural artefacts
Citations: 0
Authors: 2
Year: 2025
Abstract
Understanding how AI and constructions of race intersect and which new ethical dilemmas arise in this context has become pressing. In this article, we introduce a university pedagogical project centred on collective explorations of raciolinguistic biases in Large Language Models. The project aimed at the development of Critical AI Language Literacy. It approached raciolinguistic biases from linguistic anthropological perspectives, where ‘race’ is understood as a linguistic and discursive construction (Alim et al. 2020). To inspect whether and how raciolinguistic biases are to be found in Large Language Models, a group of graduate students collected output of ChatGPT in different languages and subsequently engaged in discussions around the output’s potentially racially biased language or content. Prompts were entered by ten different individuals, in different ChatGPT accounts and in different languages. The results of these prompting experiments contributed to discussions on the factors that may influence the presence and degree of biased output in ChatGPT. These include the data set, the use history of the account, choice of language or language variety, and explicit sensitivity of the topic. The possible impact of debiasing techniques was also discussed. The project helped students to understand that LLMs are not neutral technologies but cultural artefacts in which biased data and cultural histories are embedded. We argue that the approach has the potential to support a critical awareness in LLM use and to therefore foster Critical AI Language Literacy.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations