OpenAlex · Updated hourly · Last updated: 04.04.2026, 05:50

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainable Diagnostic Machine Learning Models with Feature Dependence: A Case Study in Smart Knee Implants

2026 · 1 citation · International Journal of Artificial Intelligence Tools
Open full text at the publisher

1 citation · 3 authors · 2026

Abstract

Machine learning (ML) is capable of aiding and improving medical diagnostics for a wide variety of pathologies. When used to process data from smart implantable medical devices, ML can offer timely and automated diagnostic tools to improve patient care. However, diagnostic ML models that influence healthcare decisions must have outputs that can be understood and trusted. Currently, ML-based medical diagnostic research focuses mainly on accuracy, lacking investigation of interpretability, explainability, and trust in the models — fundamental principles of diagnostic ML. To address this gap, this study seeks to improve explainability in ML models trained to diagnose aseptic tibial loosening in smart piezoelectric total knee replacements (TKRs). Specifically, a pathway to explainability is presented by applying the local interpretable model-agnostic explanations (LIME) method to interpret previously trained k-nearest neighbor (KNN), support vector machine (SVM), and discriminant analysis (DA) ML models that classify cement damage from electromechanical impedance (EMI) signatures of piezoelectric-instrumented simulated TKRs. Two simple yet novel feature engineering techniques are proposed to align the models with domain knowledge of piezoelectric impedance-based structural health monitoring (SHM), thus improving explainability. These feature engineering techniques are broadly applicable to data types where adjacent features have inherent relations (i.e., time series, spectra, images, etc.). The original KNN, SVM, and DA models demonstrated explainability scores of 52%, 64.4%, and 42.1%, respectively. The first feature engineering technique improved the KNN and SVM scores to 84.6% and 87.6%, respectively, with DA falling to 29.5%. The second feature engineering technique improved the KNN and SVM scores to 74.5% and 81.1%, respectively, with DA falling to 33.5%.
The pathway to explainability and the feature engineering techniques presented in this study yield models with improved explainability that are better aligned with domain knowledge, while either improving (SVM) or maintaining (KNN) their original accuracy.
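The abstract notes that the proposed feature engineering exploits relations between adjacent features, such as neighboring frequency bins of an EMI spectrum. The paper's exact techniques are not given here; the following is a minimal, hypothetical sketch of one plausible approach of this kind — averaging adjacent spectral bins into contiguous windows so that each engineered feature (and hence each LIME attribution) maps to a physically meaningful frequency region. The function name and window width are illustrative assumptions, not the authors' method.

```python
import numpy as np

def bin_adjacent_features(X, width):
    """Aggregate adjacent columns (e.g. neighboring frequency bins of an
    impedance spectrum) into window means.

    Hypothetical illustration of adjacent-feature engineering; each output
    column covers a contiguous block of `width` original features, so an
    explanation weight on it refers to a whole spectral region.
    """
    n_samples, n_features = X.shape
    n_bins = n_features // width
    # Drop any trailing features that do not fill a complete window.
    trimmed = X[:, : n_bins * width]
    return trimmed.reshape(n_samples, n_bins, width).mean(axis=2)

# Toy example: 2 "spectra" with 6 frequency bins each, windows of 3 bins.
X = np.arange(12, dtype=float).reshape(2, 6)
Xb = bin_adjacent_features(X, width=3)
# Xb -> [[1.0, 4.0], [7.0, 10.0]]
```

A model trained on `Xb` rather than `X` would receive LIME attributions per spectral window, which is one way such engineering could align explanations with SHM domain knowledge.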


Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Total Knee Arthroplasty Outcomes