This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainable Diagnostic Machine Learning Models with Feature Dependence: A Case Study in Smart Knee Implants
Citations: 1
Authors: 3
Year: 2026
Abstract
Machine learning (ML) is capable of aiding and improving medical diagnostics for a wide variety of pathologies. When used to process data from smart implantable medical devices, ML can offer timely and automated diagnostic tools to improve patient care. However, diagnostic ML models that influence healthcare decisions must have outputs that can be understood and trusted. Currently, ML-based medical diagnostic research focuses mainly on accuracy, lacking investigation of interpretability, explainability, and trust in the models, all fundamental principles of diagnostic ML. To address this gap, this study seeks to improve explainability in ML models trained to diagnose aseptic tibial loosening in smart piezoelectric total knee replacements (TKRs). Specifically, a pathway to explainability is presented by applying the local interpretable model-agnostic explanations (LIME) method to interpret previously trained k-nearest neighbor (KNN), support vector machine (SVM), and discriminant analysis (DA) ML models that classify cement damage from electromechanical impedance (EMI) signatures of piezoelectric-instrumented simulated TKRs. Two simple yet novel feature engineering techniques are proposed to align the models with domain knowledge of piezoelectric impedance-based structural health monitoring (SHM), thus improving explainability. These feature engineering techniques are broadly applicable to data types where adjacent features have inherent relations (e.g., time series, spectra, and images). The original KNN, SVM, and DA models demonstrated explainability scores of 52%, 64.4%, and 42.1%, respectively. The first feature engineering technique improved the KNN and SVM scores to 84.6% and 87.6%, respectively, with DA falling to 29.5%. The second feature engineering technique improved the KNN and SVM scores to 74.5% and 81.1%, respectively, with DA falling to 33.5%. The pathway to explainability and the feature engineering techniques presented in this study yield models with improved explainability that are better aligned with domain knowledge, while either improving (SVM) or maintaining (KNN) their original accuracy.
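The abstract does not include the underlying implementation. As a rough illustration of the LIME step it describes, the Python sketch below applies the lime package's LimeTabularExplainer to a scikit-learn KNN classifier trained on synthetic stand-in data. The synthetic "impedance" features, class labels, bin names, and all parameter choices are assumptions made here for illustration only, not the paper's actual EMI data, models, or scoring procedure.

# Minimal sketch, assuming synthetic stand-in data: explaining one
# prediction of a KNN classifier with LIME. Not the paper's pipeline.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Hypothetical EMI-like signatures: 50 frequency-bin features and two
# classes ("intact" vs. "damaged" cement); damage shifts a band of
# adjacent bins, mimicking features with inherent adjacency relations.
n, d = 200, 50
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)
X[y == 1, 20:30] += 1.0

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    mode="classification",
    feature_names=[f"bin_{i}" for i in range(d)],
    class_names=["intact", "damaged"],
)

# LIME fits a local linear surrogate around one sample and reports
# per-feature contribution weights for that single prediction.
exp = explainer.explain_instance(X[0], knn.predict_proba, num_features=10)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")

Because LIME explains one prediction at a time, the printed weights indicate which frequency-bin features pushed this particular sample toward "damaged"; per-prediction attributions of this kind are what can then be compared against SHM domain knowledge.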
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,380 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,243 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,671 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,496 citations