OpenAlex · Updated hourly · Last updated: 22.04.2026, 13:58

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Teaching Parrots to See Red: Self-Audits of Generative Language Models Overlook Sociotechnical Harms

2025 · 1 citation · Proceedings of the AAAI Symposium Series · Open Access
Open full text at publisher

1

Citations

2

Authors

2025

Year

Abstract

The release of ChatGPT as a “low-key research preview” and its viral growth spurred a gold rush among tech companies marketing generative AI (GenAI) as a universal tool. In 2023, the U.S. secured voluntary commitments from top AI developers, including OpenAI, Google, Meta, and Anthropic, to conduct self-audits ensuring model safety before release. However, these models exhibit widespread biases, including by race and gender, unjustly discriminating against users. To inspect this contradiction, we review ten corporate self-audits, finding a notable absence of real-world use cases in sectors like education, creative works, and public policy. Instead, audits focus on thwarting adversarial consumers in hypothetical scenarios and rely on GenAI models to approximate human impacts. This approach places consumers at risk by impairing the mitigation of representational, allocational, and quality-of-service harms. We conclude with recommendations to address audit gaps and protect GenAI consumers.

Related works

Authors

Institutions

Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education