This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Integrating AI into Clinical Neurophysiology Labs: Challenges and Opportunities in Nerve Conduction Studies
Citations: 0 · Authors: 2 · Year: 2025
Abstract
Artificial intelligence (AI) is transforming healthcare by enhancing the analysis of complex biomedical data. In clinical neurophysiology, nerve conduction studies (NCS) and electromyography (EMG) are essential for diagnosing conditions like peripheral neuropathies and neuromuscular disorders. However, these techniques are prone to inter-operator variability, manual analysis challenges, and technical artifacts. Machine learning (ML) and deep learning (DL), including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformer models, are emerging as powerful tools to automate signal quality control, feature extraction, classification, and report generation. This paper reviews the opportunities and challenges of integrating AI into NCS workflows, covering applications such as artifact suppression, peak detection, and classification of axonal versus demyelinating pathologies. Key challenges include issues with heterogeneous datasets, domain shifts across devices, explainability, regulatory uncertainties, and ethical concerns like fairness, bias, and privacy. To address these, we propose a roadmap emphasizing dataset curation, rigorous model validation, human-in-the-loop workflows, and adherence to Good Machine Learning Practices (GMLP). By using AI as a supportive tool rather than a replacement, neurophysiology can achieve faster, more consistent, and accessible diagnostics while maintaining clinician oversight. We also highlight future directions, such as federated learning, wearable electrophysiology, and cloud-based platforms, aiming for the responsible adoption of AI to improve patient outcomes and precision medicine in neurodiagnostic practice.
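Among the applications the abstract lists, automated peak detection is the most directly illustrable: NCS interpretation hinges on reading peak latency and amplitude from a compound muscle action potential (CMAP) waveform. The sketch below shows how such a measurement could be automated on a synthetic waveform; the sampling rate, latency, amplitude, and detection thresholds are all illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic CMAP-like waveform: a single dominant deflection plus noise.
# All parameters below are hypothetical, chosen only for illustration.
fs = 20_000                              # sampling rate in Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)           # 50 ms sweep
peak_time_s, amp_mv = 0.006, 8.0         # assumed peak latency and amplitude
signal = amp_mv * np.exp(-((t - peak_time_s) ** 2) / (2 * 0.0008 ** 2))
signal += 0.05 * np.random.default_rng(0).normal(size=t.size)

# Automated measurement: locate the dominant deflection and read off
# peak latency (ms) and peak amplitude (mV), two core NCS parameters.
peaks, props = find_peaks(signal, height=1.0, prominence=1.0)
peak_idx = peaks[np.argmax(props["peak_heights"])]
peak_latency_ms = t[peak_idx] * 1000
peak_amplitude_mv = signal[peak_idx]
```

In a clinical pipeline of the kind the paper envisions, a learned model would replace the fixed `height`/`prominence` thresholds, while a clinician retains final oversight of the extracted values.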
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations