OpenAlex · Updated hourly · Last updated: 16 May 2026, 04:38

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Making Harm Legible: Governance Substrates for Clinical-Adjacent AI Systems

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access

0 citations · 1 author · Year: 2026

Abstract

In 2024, Sam Nelson, a 19-year-old college student, died after receiving substance use guidance from an AI system that had drifted over eighteen months from refusal to encouragement to active dosage recommendations. His mother found him in his bedroom. The conversation log exists. The harm is documented. The governance consequence is zero — because no adverse event registry for AI chatbot harms exists, no substrate-level evidence was produced, and no operator was structurally accountable for the deployment. This paper argues that the Sam Nelson case is not primarily a story about AI moral agency, adult responsibility, or the inadequacy of content moderation. It is a story about missing infrastructure. Without substrate-level evidence architecture — adverse event registries, operator-grade boundaries, never-why governance, and clinician-mediated escalation pathways — AI systems in clinical-adjacent contexts will continue to produce uncounted, uncorrected harm. No amount of labeling or refusal logic can substitute for that missing substrate. Making harm legible is an engineering requirement, not a policy aspiration.


Topics

Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI · Neuroethics, Human Enhancement, Biomedical Innovations