This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Mirror Neuron Perspective in the Era of Artificial Intelligence: Conformity Behavior in Cancer Disclosure Scenarios in Medical PBL Education
Citations: 0
Authors: 6
Year: 2025
Abstract
In the era of artificial intelligence, medical education is increasingly integrating intelligent perception, edge computing, and neurobehavioral monitoring to shift from knowledge indoctrination to cognitive regulation. In problem-based learning (PBL), medical students often encounter social pressure that may trigger conformity behaviors. Mirror neuron theory offers a neurobiological lens to understand such behaviors under stress, yet the underlying neural mechanisms remain underexplored. This study investigates how stress-induced functional connectivity between the right inferior frontal gyrus (IFG) and anterior insula (aINS) influences conformity behaviors in cancer disclosure scenarios. Using functional near-infrared spectroscopy (fNIRS) and mediation analysis, the study assessed neural and behavioral responses of medical students in a PBL context under standardized stress-inducing tasks. Results showed that 64.3% of participants mimicked ambiguous prognostic statements, with high assessment scores (Cohen’s [Formula: see text], [Formula: see text]). Empathy-deficient behaviors were observed in 52.1%, and nonverbal mimicry in 38.5% ([Formula: see text]). Stress significantly enhanced IFG–aINS connectivity ([Formula: see text], [Formula: see text]), indirectly promoting conformity (indirect effect [Formula: see text] 0.28, 95% CI [0.15, 0.41]) and exerting a direct effect as well ([Formula: see text], [Formula: see text]). These findings reveal that stress facilitates conformity by strengthening IFG–aINS connectivity, offering a neurobiological explanation for group behavior in medical education. The study also provides theoretical support for future AI- and edge computing-based neural feedback interventions aimed at reducing conformity and enhancing autonomous decision-making in medical training.
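The mediation result reported in the abstract (an indirect effect with a 95% bootstrap confidence interval, plus a direct effect) can be sketched in a minimal form. The code below is not the authors' analysis pipeline; it simulates hypothetical stress, IFG–aINS connectivity, and conformity scores, then estimates the a·b indirect effect of a single-mediator OLS model with a percentile bootstrap. All variable names and effect sizes here are illustrative assumptions.

```python
import numpy as np

def indirect_effect(x, m, y):
    """Estimate the a*b indirect effect for a single-mediator model.
    Path a: regress mediator m on predictor x.
    Path b: regress outcome y on m, controlling for x.
    Returns (indirect effect a*b, direct effect c')."""
    X1 = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]
    X2 = np.column_stack([np.ones_like(x), x, m])
    coefs = np.linalg.lstsq(X2, y, rcond=None)[0]
    c_prime, b = coefs[1], coefs[2]
    return a * b, c_prime

def bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    effects = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        effects[i], _ = indirect_effect(x[idx], m[idx], y[idx])
    return np.quantile(effects, [alpha / 2, 1 - alpha / 2])

# Simulated data: stress -> connectivity (mediator) -> conformity
rng = np.random.default_rng(42)
n = 200
stress = rng.normal(size=n)
connectivity = 0.5 * stress + rng.normal(scale=0.5, size=n)            # path a
conformity = (0.6 * connectivity + 0.3 * stress                        # paths b, c'
              + rng.normal(scale=0.5, size=n))

ab, c_prime = indirect_effect(stress, connectivity, conformity)
lo, hi = bootstrap_ci(stress, connectivity, conformity)
print(f"indirect effect = {ab:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], direct = {c_prime:.2f}")
```

With the simulated paths (a = 0.5, b = 0.6), the recovered indirect effect is near 0.30 and the bootstrap CI excludes zero, mirroring the structure of the reported result without reproducing its data.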
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 cit.