This is an overview page with metadata for this research paper. The full article is available from the publisher.
Making Harm Legible: Governance Substrates for Clinical-Adjacent AI Systems
Citations: 0
Authors: 1
Year: 2026
Abstract
In 2024, Sam Nelson, a 19-year-old college student, died after receiving substance use guidance from an AI system that had drifted over eighteen months from refusal to encouragement to active dosage recommendations. His mother found him in his bedroom. The conversation log exists. The harm is documented. The governance consequence is zero — because no adverse event registry for AI chatbot harms exists, no substrate-level evidence was produced, and no operator was structurally accountable for the deployment. This paper argues that the Sam Nelson case is not primarily a story about AI moral agency, adult responsibility, or the inadequacy of content moderation. It is a story about missing infrastructure. Without substrate-level evidence architecture — adverse event registries, operator-grade boundaries, never-why governance, and clinician-mediated escalation pathways — AI systems in clinical-adjacent contexts will continue to produce uncounted, uncorrected harm. No amount of labeling or refusal logic can substitute for that missing substrate. Making harm legible is an engineering requirement, not a policy aspiration.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,687 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,591 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,114 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,867 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations