This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Trustworthy AI in Healthcare
Citations: 2 · Authors: 1 · Year: 2024
Abstract
The rapid integration of artificial intelligence (AI) into medical informatics, particularly in the context of mental health data, can bring about significant transformations in healthcare decision-support systems. However, ensuring that AI gains widespread acceptance and is regarded as reliable in healthcare requires addressing critical issues concerning its robustness, fairness, and privacy. This chapter presents a comprehensive study that delves into the urgent need for dependable AI in medical informatics, explicitly focusing on collecting mental health data using sensors. The authors put forth a methodological framework combining cutting-edge AI techniques, leveraging deep learning models such as recurrent neural networks (RNN), including variants like LSTM and GRU, and ensemble techniques like random forest, AdaBoost, and XGBoost. Through a series of experiments involving healthcare decision support systems, the authors underscore the pivotal role of model overfitting in establishing trustworthy AI systems.
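The abstract names several ensemble techniques (random forest, AdaBoost, XGBoost) as part of the proposed framework. As a purely illustrative sketch of the general ensembling idea, not the authors' actual pipeline, the snippet below combines toy threshold classifiers by majority voting; the base learners and thresholds are invented stand-ins for the far stronger models named in the chapter.

```python
from collections import Counter

# Toy "weak learners": each classifies a 1-D input by a simple threshold.
# These are hypothetical stand-ins for the chapter's base models
# (random forest, AdaBoost, XGBoost), which are not reproduced here.
def make_threshold_classifier(threshold):
    return lambda x: 1 if x >= threshold else 0

def majority_vote(classifiers, x):
    """Combine base-model predictions by simple majority voting."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

ensemble = [make_threshold_classifier(t) for t in (0.3, 0.5, 0.7)]
print(majority_vote(ensemble, 0.6))  # two of three classifiers vote 1
```

The voting rule is the simplest way an ensemble can reduce the variance of individual learners; the methods cited in the abstract use more sophisticated weighting and boosting schemes.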
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations