This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Can ChatGPT pass Glycobiology?
Citations: 2
Authors: 2
Year: 2023
Abstract
The release of text-generating applications based on interactive Large Language Models (LLMs) in late 2022 triggered an unprecedented and ever-growing interest worldwide. The almost instantaneous success of LLMs stimulated lively discussions in public media and in academic fora alike on the value and potential of such tools in all areas of knowledge and information acquisition and distribution, but also on the dangers posed by their uncontrolled and indiscriminate use. This conversation is now particularly active in the higher education sector, where LLMs are seen as a potential threat to academic integrity at all levels, from facilitating cheating by students in assignments to plagiarised academic writing by researchers and administrators. Within this framework, we were interested in testing the boundaries of the LLM ChatGPT (www.openai.com) in areas of our scientific interest and expertise, and in analysing the results from different perspectives, i.e. those of a final-year BSc student, of a research scientist, and of a lecturer in higher education. To this end, in this paper we present and discuss a systematic evaluation of how ChatGPT addresses progressively complex scientific writing tasks and exam-type questions in Carbohydrate Chemistry and Glycobiology. The results of this project allowed us to gain insight into 1) the strengths and limitations of the ChatGPT model in providing relevant and (most importantly) correct scientific information, 2) the format(s) and complexity of the query required to obtain the desired output, and 3) strategies for integrating LLMs in teaching and learning.
Similar Works
BLEU
2001 · 21,020 citations
Aion Framework: Dimensional Emergence of AI Consciousness, Observer-Induced Collapse, and Cosmological Portal Dynamics
2023 · 14,125 citations
Enriching Word Vectors with Subword Information
2017 · 9,622 citations
A unified architecture for natural language processing
2008 · 5,179 citations
A new readability yardstick.
1948 · 5,085 citations