This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Toward trustworthy AI in systematic reviews: a statistically validated AI-augmented framework for analysing knowledge transfer strategies in urban water management
Citations: 0
Authors: 4
Year: 2026
Abstract
The exponential expansion of academic literature across complex environmental domains has created a gap where the volume of research outpaces human capacity for effective integration. While Large Language Models (LLMs) offer a transformative solution to bridge this gap, their deployment in rigorous scientific inquiry is frequently compromised by model stochasticity, the potential for hallucination, and the opacity of automated reasoning. Addressing the critical imperative for dependable and reproducible AI, this study presents a robust workflow designed to ensure methodological rigor and evidential integrity in the rapid and reliable synthesis of large-scale scientific literature. We operationalised this framework within the domain of urban water management, specifically to analyse complex Knowledge Transfer (KT) strategies from a corpus of over 1,500 unstructured articles. To mitigate the risks inherent in generative AI, we developed a multi-layered validation protocol. First, we deployed an AI-assisted screening mechanism to filter the initial corpus down to 115 highly relevant articles, ensuring data relevance. Second, we implemented a Human-in-the-Loop design to iteratively synthesise a comprehensive analytical framework. By refining LLM-generated insights against domain expertise, we consolidated 24 attributes that characterise the operational mechanisms of learning strategies in the corpus, preventing ungrounded inference while capturing emerging learning dynamics. Third, we addressed model variability through iterative Multi-LLM Triangulation (utilising Gemini, ChatGPT, and DeepSeek). By repeatedly coding the 115 articles with the framework, we quantified qualitative insights to analyse how distinct learning strategies manifest their operational mechanisms.
Finally, we employed Multiple Correspondence Analysis (MCA) and Hierarchical Agglomerative Clustering (HAC) to analyse the quantified results, categorising the eight identified learning strategies into three distinct clusters based on their functions and usage contexts, thereby effectively harnessing the LLM-generated insights. Beyond this specific application, this research contributes a methodological blueprint for responsible AI integration in scientific inquiry. It demonstrates that combining theory-driven constraints with statistical verification is essential to elevate LLM-generated insights to the standard of reproducible scientific evidence.
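The final statistical step described above (MCA followed by HAC) can be sketched as follows. This is a minimal illustrative example on synthetic data, not the paper's actual pipeline or data: the 8×24 binary strategy-by-attribute matrix is a random stand-in, and the correspondence analysis is implemented directly via an SVD of standardized residuals rather than through a dedicated MCA library.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic stand-in data: 8 learning strategies (rows) scored against
# 24 binary operational attributes (columns). Illustrative only.
rng = np.random.default_rng(0)
Z = rng.integers(0, 2, size=(8, 24)).astype(float)
Z = Z[:, Z.sum(axis=0) > 0]        # drop zero-mass columns (CA requires positive masses)

# Minimal correspondence analysis: SVD of the standardized residuals.
P = Z / Z.sum()                    # correspondence matrix
r = P.sum(axis=1, keepdims=True)   # row masses
c = P.sum(axis=0, keepdims=True)   # column masses
S = (P - r * c) / np.sqrt(r * c)   # standardized residuals
U, s, Vt = np.linalg.svd(S, full_matrices=False)

# Principal row coordinates on the first two axes
coords = (U * s) / np.sqrt(r)
coords2 = coords[:, :2]

# Hierarchical agglomerative clustering (Ward linkage), cut at 3 clusters
labels = fcluster(linkage(coords2, method="ward"), t=3, criterion="maxclust")
print(labels)                      # one cluster label (1..3) per strategy
```

Ward linkage on the low-dimensional MCA coordinates is a common pairing because the principal axes concentrate the chi-square distances between row profiles; the choice of three clusters here mirrors the paper's reported grouping but is otherwise an assumption.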
Related works
2019 · 31,742 citations
Techniques to Identify Themes
2003 · 5,391 citations
Answering the Call for a Standard Reliability Measure for Coding Data
2007 · 4,082 citations
Basic Content Analysis
1990 · 4,045 citations
Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts
2013 · 3,071 citations