This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Operationalizing GenAI: a framework protocol for auditable and responsible GenAI use in academic research
Citations: 0 · Authors: 4 · Year: 2026
Abstract
Purpose
The purpose of this study is twofold: first, to document current generative artificial intelligence (GenAI) opinions and practices among leading hospitality and tourism scholars; and second, to introduce the Generative AI Research Integration Framework (GRAIF) as a systematic methodology for transparent, stage-specific GenAI use in academic research. The authors further illustrate the GRAIF's feasibility through self-application, providing an auditable template for researchers.

Design/methodology/approach
Thirty-one leading hospitality and tourism scholars (journal editors, full professors and methods specialists) responded to eight open-ended questions on GenAI in academic research. Initial themes were identified from participants' responses using ChatGPT. The authors then manually reviewed the data and revised and refined these initial themes.

Findings
Developed from expert consensus among the 31 editors and prominent scholars, the GRAIF offers a transparent, auditable, six-module workflow protocol. It guides researchers on where and how GenAI can be used, and transparently informs reviewers, editors and other stakeholders about how GenAI was used in the research process.

Practical implications
The GRAIF protocol supports flexible adoption: researchers can use specific modules, such as Module W for writing and editing or Module A for analysis and coding. PhD supervisors can embed the GRAIF as an AI-use compliance sheet in proposals for early alignment and planning. Similarly, journal editors can request that authors submit a GRAIF sheet to ensure responsible and transparent use of GenAI and to facilitate effective manuscript evaluation.

Originality/value
This study makes a significant contribution to academic practice by providing a practical and auditable solution that goes beyond the fragmented and often divergent guidelines currently available for GenAI use in academic research. In doing so, it addresses a critical gap in responsible GenAI use, aiming for enhanced research transparency and rigor.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations