This is an overview page with metadata for this scientific work. The full article is available from the publisher.
In Which Ways Is Machine Learning Opaque?
Citations: 0
Authors: 1
Year: 2026
Abstract
Machine learning models are often said to be opaque: They are supposed to lack transparency and are compared to black boxes. But in which ways are ML models opaque? This chapter aims to help answer this question. To establish a basis for my investigation, I first collect a few statements about the opacity of machine learning (ML) from working scientists and authors who comment on ML. The content of the statements can be summarized by saying that scientists face difficulties in reaching specific epistemic achievements regarding ML, e.g., they struggle to accurately predict the models' outcomes. This empirical basis is contrasted with a theoretical account, namely the well-known notion of epistemic opacity proposed by Paul Humphreys. My research question then is to what extent epistemic opacity à la Humphreys applies to ML and to what extent it can account for the problems that researchers discuss under the label “opacity of ML”. As a heuristic, an analogy with diseases proves helpful: The question is whether Humphreys-style opacity provides a good diagnosis for the symptoms discussed under the heading “opacity of ML”. My answer is that epistemic opacity à la Humphreys does apply to some forms of ML and that this is a partial, but not the only, cause of what people describe as opacity. There are other partial causes, for example, that researchers currently lack informative higher-level descriptions of ML models. This leaves us with two options for clarifying the idea of opacity: We can either take “opacity” to be an umbrella term for the difficulties in reaching various achievements regarding ML, or we can reserve the term for a more specific problem that stands out as a crucial factor in explaining why we struggle with these achievements.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,336 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,241 citations
"Why Should I Trust You?"
2016 · 14,227 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,114 citations