This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Toward Secure and Standardized AI Frameworks in Metaverse-Enabled Cognitive Digital Twins for Healthcare
Citations: 0
Authors: 8
Year: 2025
Abstract
A new generation of consumer electronics known as the "metaverse" leverages the ideas, scope, and protocols of the internet to provide a stable, decentralized virtual environment that is particularly useful in the medical field. This technical advancement fuses cutting-edge digital technologies such as blockchain, 3D modeling, augmented- and virtual-reality-enabled wearable sensors, the consumer Internet of Things, and Autonomous Artificial Intelligence (AAI). The metaverse has given rise to a variety of research paradigms across several domains. In the healthcare industry, its advancement increases the flexibility of virtual therapy, immersive health education and training, cognitive trials, and other session-related activities. This paper explores the effects of digital gaming therapy on the current state of health diagnosis and treatment. After identifying these aspects, it proposes an innovative and secure architecture, called Meta-EGamofy, to overcome significant issues in cognitive healthcare. We offer a design solution for diagnosing phobias, conducting therapy, and providing treatment through the Sandbox platform, using an immersive game-based therapy method. Extending the definition of the digital twin in Meta-EGamofy allows for system management, real-time monitoring, and cognitive health assessments, all while preserving the consultant-patient relationship. In developing this immersive healthcare solution, we prioritize resource optimization for wearable sensors on a consortium distributed network. Autonomous artificial intelligence is therefore crucial, especially the use of artificial neural networks. In addition, Meta-EGamofy can examine customized risk factors, such as vital signs, behaviors, and firsthand observations, supporting consultants in carrying out more thorough evaluations.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations