OpenAlex · Updated hourly · Last updated: 18 Mar 2026, 11:10

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

LLM-Based Analysis of the AI Incident Database: Insights for AI Governance

2025 · 0 citations · 1 author

Abstract

Artificial Intelligence (AI) is increasingly adopted in critical sectors such as healthcare, finance, and public administration, where it promises significant gains in efficiency, automation, and decision support. At the same time, these systems expose societies to serious risks, including bias, discrimination, safety failures, and privacy infringements. To document and learn from such failures, the Artificial Intelligence Incident Database (AIID) was created as a community-driven repository and collective memory of AI harms, designed to support research, best practices, and governance. As of 2025, the AIID contains more than 1,100 incidents, yet its unstructured, narrative reports make systematic analysis difficult and limit their policy value. This paper addresses that challenge by applying a Large Language Model (LLM) pipeline, guided by the OECD AI Incident Reporting Framework, to transform AIID reports into structured data and enable systematic analysis of recurring patterns in AI incidents. The analysis reveals a sharp rise in incident frequency and severity since 2020, with human and economic harms dominating, transparency and fairness the most frequently violated principles, and ICT, finance, and public administration accounting for most cases. The study thus provides insights that can better inform AI governance, and it demonstrates how LLMs can transform an unstructured dataset into structured data for analysis.
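The core of such a pipeline is validating the LLM's free-form output against a fixed incident schema. The sketch below is illustrative only: the field names and allowed values are a hypothetical subset loosely inspired by the OECD AI Incident Reporting Framework, not the paper's actual schema, and `parse_incident` assumes the LLM was prompted to return JSON.

```python
import json

# Hypothetical schema: an illustrative subset of OECD-style incident fields.
# The paper's actual field set and vocabularies are not shown here.
FIELDS = {
    "harm_type": {"physical", "psychological", "economic", "reputational", "human_rights"},
    "severity": {"low", "medium", "high"},
    "sector": {"ict", "finance", "public_administration", "healthcare", "other"},
}

def parse_incident(llm_output: str) -> dict:
    """Validate a (hypothetical) LLM JSON response against the schema.

    Missing fields, malformed JSON, or out-of-vocabulary values fall back
    to "unknown" so that one bad response cannot corrupt the dataset.
    """
    try:
        raw = json.loads(llm_output)
    except json.JSONDecodeError:
        raw = {}
    record = {}
    for field, allowed in FIELDS.items():
        value = str(raw.get(field, "")).strip().lower().replace(" ", "_")
        record[field] = value if value in allowed else "unknown"
    return record

# A well-formed response and a malformed one
ok = parse_incident('{"harm_type": "economic", "severity": "high", "sector": "finance"}')
bad = parse_incident("not json")
```

Normalizing every response into the same closed vocabulary is what makes the downstream pattern analysis (harm types by year, violations by sector) a simple aggregation over structured rows.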


Topics

Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI · Explainable Artificial Intelligence (XAI)