This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
The Balancing Act of Policies in Developing Machine Learning Explanations
Citations: 0
Authors: 1
Year: 2025
Abstract
Due to the nature of opaque machine learning (ML) models, software engineers and data scientists struggle to understand how ML models make decisions [1]. Explainability research aims to provide transparency for these models [2] through two types of explanations. Global explanations describe how a model works generally and provide insight into its accuracy, biases, and fairness. Local explanations describe individual predictions made by the model in specific use cases.
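The abstract's distinction between global and local explanations can be illustrated with a minimal sketch. The toy linear model, its weights, and the feature names below are invented purely for illustration; they are not from the paper. For a linear model, the coefficient magnitudes serve as a global explanation, while per-feature contributions (coefficient times feature value) explain one specific prediction locally.

```python
import numpy as np

# Hypothetical toy linear model; weights and feature names are illustrative only.
weights = np.array([0.8, -0.5, 0.1])
feature_names = ["income", "debt", "age"]

# Global explanation: how strongly the model relies on each feature overall,
# here approximated by coefficient magnitude.
global_importance = dict(zip(feature_names, np.abs(weights)))

# Local explanation: per-feature contribution to one specific prediction.
x = np.array([1.2, 0.4, 2.0])  # a single made-up input instance
local_contributions = dict(zip(feature_names, weights * x))

print(global_importance)    # which features matter across the model
print(local_contributions)  # why the model scored this particular input
```

In practice, tools such as SHAP or LIME produce local explanations for non-linear models, but the principle is the same: a global view summarizes the model, a local view attributes one prediction.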
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,311 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations