This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Empowering AI-Driven Healthcare With Secure, Decentralized, and Privacy-Enhancing Adaptive Intelligence
2
Citations
4
Authors
2025
Year
Abstract
Integrating the Internet of Medical Things (IoMT) and artificial intelligence (AI) is revolutionizing healthcare by enabling real-time health monitoring, predictive analytics, and personalized treatment. However, existing AI healthcare models are trained offline on static datasets, making them less adaptable to evolving health data and potentially reducing their accuracy and decision-making quality. Furthermore, adversaries may exploit this by injecting frequent data shifts, straining healthcare resources. Privacy concerns also arise from the exposure of sensitive patient data. Therefore, we propose a novel AI-driven healthcare methodology with secure, decentralized, and privacy-enhancing adaptive intelligence. First, a deep learning (DL) model is devised that leverages its high-confidence probability to detect data drift efficiently. Next, we propose a privacy-preserving approach leveraging functional encryption to ensure patient data confidentiality during drift detection and model retraining while eliminating reliance on a trusted entity. Lastly, we propose a customized consortium blockchain with group signatures that protects patient anonymity and unlinkability, prevents data tampering, and blocks false claims of drift incidents. Moreover, to ensure decentralization, it removes the need for a trusted authority in cryptographic key generation. Our experiments, on a real testbed and healthcare datasets, show that the proposed methodology achieves real-time drift detection with performance comparable to existing methods, while reducing the computational time by 52.35%. It also maintains high accuracy, achieving up to 98.43% with the offline health monitoring model and up to 96% with the online adaptive model. Additionally, it preserves patient privacy while reducing computational and communication overhead by 94.26% and 89%, respectively, compared to the state-of-the-art.
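The abstract describes detecting data drift from the model's high-confidence probability. The paper's exact criterion is not given on this page, so the following is only a minimal illustrative sketch of the general idea: compute the top-class softmax confidence for a batch of predictions and flag drift when the mean confidence drops below an assumed threshold (the function names and the threshold value here are hypothetical, not from the paper).

```python
import numpy as np

def max_confidence(logits: np.ndarray) -> np.ndarray:
    """Softmax each row of logits and return the top-class probability."""
    z = logits - logits.max(axis=1, keepdims=True)  # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def detect_drift(logits: np.ndarray, threshold: float = 0.8) -> bool:
    """Flag drift when mean top-class confidence over a batch falls below threshold.

    The 0.8 threshold is an assumption for illustration only.
    """
    return bool(max_confidence(logits).mean() < threshold)

# In-distribution-like batch: one logit dominates, so confidence is high.
confident_batch = np.array([[6.0, 0.0, 0.0], [0.0, 7.0, 0.0]])
# Shifted batch: near-uniform logits, so confidence collapses and drift is flagged.
uncertain_batch = np.array([[1.0, 0.9, 1.1], [0.5, 0.6, 0.4]])
```

Here, low confidence on incoming data serves as a cheap, label-free proxy for distribution shift, which is consistent with the real-time, low-overhead detection the abstract claims.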
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,496 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,386 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,848 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,562 citations