This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
AI, Explainability, and Safeguarding Patient Safety in Europe
2022 · 17 citations · 2 authors
Abstract
This chapter explores the efforts made by regulators in Europe to develop standards concerning the explainability of artificial intelligence (AI) systems used in wearables. Diagnostic health devices such as fitness trackers, smart health watches, ECG and blood pressure monitors, and other biosensors are becoming more user-friendly, computationally powerful, and integrated into society. They are used to track the spread of infectious diseases, monitor health remotely, and predict the onset of illness before symptoms arise. At their foundation are complex neural networks making predictions from a plethora of data. While their use has been growing, the COVID-19 pandemic will likely accelerate that rise as governments grapple with monitoring and containing the spread of infectious diseases. One key challenge for scientists and regulators is to ensure that predictions are understood by and explainable to legislators, policymakers, doctors, and patients, enabling informed decision making.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,380 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,243 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,671 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,496 citations