This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Safeguarding Connected Health: Leveraging Trustworthy AI Techniques to Harden Intrusion Detection Systems Against Data Poisoning Threats in IoMT Environments
Citations: 30
Authors: 1
Year: 2023
Abstract
Internet of Medical Things (IoMT) environments introduce vast security exposures, including vulnerabilities to data poisoning threats that undermine the integrity of automated patient health analytics such as diagnosis models. This research explores applying trustworthy artificial intelligence (AI) methodologies, including explainability, bias mitigation, and adversarial sample detection, to substantially enhance the resilience of medical intrusion detection systems. We architect an integrated anomaly detector featuring purpose-built modules for model interpretability, bias quantification, and advanced malicious input recognition alongside conventional classifier pipelines. Additional infrastructure provides full-lifecycle accountability via independent auditing. Our experimental intrusion detection system design, embodying multiple trustworthy AI principles, is rigorously evaluated against staged electronic-record poisoning attacks emulating realistic threats to healthcare IoMT ecosystems spanning wearables, edge devices, and hospital information systems. Results demonstrate significantly strengthened threat response capabilities versus baseline detectors lacking safeguards. Explainability mechanisms build justified trust in model behaviors by surfacing the rationale for each prediction to human operators. Continuous bias tracking enables preemptively identifying and mitigating unfair performance gaps before they widen into operational exposures over time. SafeML classifiers reliably detect even camouflaged data manipulation attempts with 97% accuracy. Together, the integrated modules restore classification performance to baseline levels even when 30% of all samples are contaminated. These findings strongly motivate prioritizing the adoption of ethical ML practices to fulfill the duty of care around patient safety and data integrity as algorithmic capabilities advance.
Similar Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,327 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,399 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,297 citations
CBAM: Convolutional Block Attention Module
2018 · 21,274 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,492 citations