This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Generative AI in Research Group Formation: Academic Perceptions and Institutional Pathways
Citations: 0
Authors: 2
Year: 2025
Abstract
Objective: This study investigates how generative artificial intelligence (AI) tools are perceived and used in the formation of academic research groups, focusing on faculty at the University of Jordan. It provides timely insights into how faculty perceive the role of generative AI in academic collaboration and offers a case study on aligning institutional policy with emerging technological opportunities in higher education.

Design/Methodology: A descriptive cross-sectional study was conducted using a mixed-methods survey of 100 faculty members, primarily principal investigators (PIs). The survey gathered quantitative data on AI familiarity, usage across research group (RG) planning tasks, and perceived benefits and risks, together with qualitative feedback on recommended institutional actions.

Findings: The results indicate moderate adoption of generative AI in RG formation, especially for creative and writing tasks. Younger and junior faculty were significantly more optimistic about AI's benefits (e.g., increased efficiency, improved content quality) than senior faculty, who reported greater concerns. The top concerns were data privacy, academic integrity (plagiarism), the accuracy of AI outputs, and overreliance on AI at the expense of human expertise. Despite these reservations, a large majority agreed on the need for official policies and training to guide the ethical and effective use of AI.

Conclusion: The findings underscore a generational divide in attitudes, suggesting targeted interventions to support senior academics and to channel junior faculty's interest. Institutions should craft clear guidelines, provide training, and ensure access to AI tools to facilitate interdisciplinary collaboration and innovation while safeguarding academic standards.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations