This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Generative AI adoption and ethical perceptions: a comparative study of medical and non-medical researchers in Chinese universities
0
Citations
10
Authors
2026
Year
Abstract
The integration of generative artificial intelligence (GenAI) into academia presents profound ethical challenges to research integrity. Disciplinary norms likely shape engagement with this technology, yet a comparative analysis between medical and non-medical research cultures is lacking. This study examines how this critical divide influences the adoption of GenAI, the development of ethically sensitive behaviors, and perceptions of oversight in Chinese universities. A cross-sectional online survey was administered to 5,731 active researchers (including postdoctoral fellows and doctoral, graduate, and undergraduate students) from 48 universities, categorized as medical (n = 1,935) or non-medical researchers (n = 3,796). The instrument measured the frequency of GenAI adoption, integrity awareness, self-reported behaviors (including data fabrication/falsification), and ethical attitudes. Analyses used the chi-square test and independent-samples t-tests. Non-medical researchers reported higher GenAI adoption (57.0% vs. 45.6%, p < .001), while medical researchers showed greater integrity awareness (83.1% vs. 76.1%, p < .001). Critically, non-medical researchers reported using GenAI for data fabrication/falsification more frequently (27.0% vs. 10.9%, p < .001). Medical researchers reported greater efficiency benefits (79.8% vs. 57.1%, p < .001) but held stricter views on the use of unverified AI-generated text (75.2% vs. 65.3%, p < .001). Non-medical researchers expressed greater confidence in detecting misconduct (71.7% vs. 58.8%, p < .001). These findings reveal two distinct approaches to using AI in research, and this divergence creates a key ethical challenge. Medical researchers are more cautious, working within strict rules but having less confidence in current oversight, which makes patient safety their top concern. In contrast, researchers in other fields often adopt AI more readily and report engaging in riskier practices.
Because of this major divide, a single, universal policy for AI ethics is inadequate. Instead, our findings point to the value of distinct guidelines tailored to each context: for medical research, the focus must be on transparency and protecting patients; for non-medical fields, the priorities should be tracking where content comes from and establishing realistic oversight. This flexible, principle-based strategy is essential for protecting the integrity of research across all disciplines.
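The group comparisons above rest on chi-square tests of proportions. As an illustrative sketch (not the authors' code), the adoption-rate comparison can be reconstructed from the reported percentages and group sizes; the counts below are approximations derived from the abstract (45.6% of 1,935 medical vs. 57.0% of 3,796 non-medical researchers).

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n  # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
    return chi2

# Approximate adopter / non-adopter counts per group (derived, not exact)
med_yes = round(0.456 * 1935)   # medical adopters
med_no = 1935 - med_yes
non_yes = round(0.570 * 3796)   # non-medical adopters
non_no = 3796 - non_yes

stat = chi2_2x2(med_yes, med_no, non_yes, non_no)
# 10.828 is the critical value for p = .001 at df = 1, so a statistic
# far above it is consistent with the reported p < .001
print(f"chi2 = {stat:.1f}")
```

The statistic computed from these reconstructed counts falls well above the df = 1 critical value, matching the direction and significance level reported in the abstract.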
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations