This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Understanding Machine Learning Explainability Models in the context of Pancreatic Cancer Treatment
2 citations · 6 authors · 2023
Abstract
The increasing adoption of artificial intelligence systems in sensitive domains where humans are particularly affected, such as medicine, has provided the context to deeply explore ways of making machine learning (ML) models understandable for their final users. The success of such systems requires the trust of their users, and thus there is a need to design and provide methods for understanding the decisions made by such systems. We start from a public Pancreatic Cancer dataset and experiment with different ML models in a diagnosis scenario, with the goal of deciding whether a patient should be prescribed a chemotherapy treatment. To validate the diagnosis results, we explore different explainability approaches: a Decision Tree based approach, a Random Forest based approach, and different model-agnostic ad-hoc approaches, and we compare them against a standard set of Pancreatic Cancer treatment rules.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,336 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,241 citations
"Why Should I Trust You?"
2016 · 14,227 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,114 citations