OpenAlex · Updated hourly · Last updated: 2026-03-11, 08:35

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead

2018 · 23 citations · arXiv (Cornell University) · Open Access

Citations: 23 · Authors: 1 · Year: 2018

Abstract

Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and in other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to *explain* black box models, rather than creating models that are *interpretable* in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward -- it is to design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI)