OpenAlex · Updated hourly · Last updated: 20.04.2026, 23:27

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Complex questions and quality answers: Comparing ChatGPT and Gemini as research collaborators

2026 · 0 citations · Journal of the Association for Information Science and Technology

Citations: 0 · Authors: 5 · Year: 2026

Abstract

AI chatbots are increasingly popular, but how they handle complex questions, and how this affects the quality of their answers, remains underexplored. This study examined whether chatbots such as ChatGPT and Gemini provide high-quality answers to users' questions. To determine whether LLMs provided accurate, complete responses, offered support for further assistance, and addressed different difficulty levels and question types, we used ChatGPT 4o-mini and Gemini 1.5 Flash to analyze 84 authentic library reference questions of varying complexity and type. Our analyses demonstrated a strong, statistically significant association between question complexity (READ) levels and further assistance. For ChatGPT 4o-mini, as complexity increased, the model provided more resources but still failed to give a complete answer, whereas Gemini 1.5 Flash showed a significant association between question type and completeness. We conclude that, compared with ChatGPT 4o-mini, Gemini 1.5 Flash is sensitive to all question types, suggesting it can provide more consistently high-quality answers. These findings suggest that understanding the relationship between question complexity and answer quality can help optimize LLMs for information seeking. As LLMs are continually updated, this study was limited to ChatGPT 4o-mini and Gemini 1.5 Flash; future research should evaluate newer LLMs and human responses using a comparative methodology.
