This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
AI-Based Doctor Agent for Early Detection of Alzheimer's Applying Hybrid Machine-Learning Techniques
Citations: 0
Authors: 3
Year: 2025
Abstract
Alzheimer's disease (AD) is a progressive neurodegenerative disorder characterized by memory loss, cognitive decline, and irreversible neuronal loss, making it a global health concern. Although early detection with machine learning (ML) has improved, challenges remain: expensive neuroimaging, limited multimodal data integration, small and homogeneous cohorts, and models that are difficult to interpret. This study proposes an AI-driven "doctor" agent that predicts Alzheimer's disease state, progression risk, and personalized therapy recommendations from standard clinical and cognitive indicators. Using mutual-information-based feature selection and a stratified 80/20 train-test split, we evaluated seven hybrid ensemble models on a Kaggle dataset of 2,149 patients. The second hybrid model achieved 95.81% accuracy and an AUC of 0.9492, outperforming both classical and deep-learning baselines; all hybrid models outperformed the individual learners, indicating robustness across heterogeneous clinical data. A serialized model (alzheime.pkl) enabled real-time predictions with confidence ratings and diagnostic commentary resembling notes written by physicians. Interpretable real-time diagnostic results, together with confidence ratings and automated "doctor's notes," help move this research from the lab to the clinic. Multimodal fusion, external validation, calibrated probability estimates, and explainable decision outputs make Alzheimer's screening fairer, faster, and more transparent. By shortening time to diagnosis and reducing clinicians' workload, the approach benefits healthcare systems. Future work will combine longitudinal, multi-omics, and wearable sensor data with privacy-preserving federated learning and explainable-AI dashboards to improve clinical interpretability and real-world adoption.
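The pipeline described in the abstract (mutual-information feature selection, a stratified 80/20 split, a hybrid ensemble, and serialization for real-time use) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic data stands in for the Kaggle clinical dataset, and the soft-voting ensemble of three stock classifiers stands in for the paper's hybrid models, whose exact composition is not stated here.

```python
import pickle
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Synthetic stand-in for the clinical/cognitive feature table (2,149 patients).
X, y = make_classification(n_samples=2149, n_features=30, n_informative=10,
                           random_state=42)

# Stratified 80/20 train-test split, as described in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Mutual-information feature selection feeding a hybrid (voting) ensemble.
model = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=15)),
    ("ensemble", VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
            ("gb", GradientBoostingClassifier(random_state=42)),
            ("lr", LogisticRegression(max_iter=1000)),
        ],
        voting="soft")),  # soft voting averages class probabilities
])
model.fit(X_train, y_train)

# Evaluate with the metrics reported in the abstract (accuracy, AUC).
proba = model.predict_proba(X_test)[:, 1]
acc = accuracy_score(y_test, model.predict(X_test))
auc = roc_auc_score(y_test, proba)

# Serialize for real-time inference; the paper names its artifact alzheime.pkl.
with open("alzheime.pkl", "wb") as f:
    pickle.dump(model, f)
```

At inference time, the pickled pipeline would be loaded and `predict_proba` used to attach a confidence score to each prediction, mirroring the abstract's "confidence ratings."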
Related Works
"Why Should I Trust You?"
2016 · 14,210 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,586 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,382 citations