OpenAlex · Updated hourly · Last updated: 27.03.2026, 19:51

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

ChatGPT takes on a phonology exam: Analyzing its characteristics, errors, and reasoning ability

2023 · 0 citations · Language and Information
Open full text at the publisher

Citations: 0 · Authors: 4 · Year: 2023

Abstract

This study tested ChatGPT's ability to answer linguistics (phonology) exam questions by comparing its responses to those of two graduate students in linguistics. Both GPT-3.5 and GPT-4 were employed. ChatGPT performed similarly to the students on true-false and multiple-choice questions and scored higher on conceptual questions. Although ChatGPT explained phonological concepts well, it performed poorly on problem-solving questions requiring inference from those concepts. In particular, ChatGPT correctly analyzed questions composed from previously distributed data but did not conduct any meaningful analysis of the given data for questions that modified the phonetic environments of phonological processes or the phonemic compositions of words. Overall, while the students wrote brief answers covering the key content, ChatGPT tended to provide answers with general content or to describe content not directly related to the problems, so its answers could be easily distinguished from the students'. In post-hoc experiments using "step-by-step" and "bullying" prompts, GPT-4 outperformed GPT-3.5. Specifically, GPT-4 derived accurate phonemic analysis results after a small number of "bullying" prompts on the problem that modified the phonetic environment. Thus, GPT-4's reasoning ability in analyzing new data was superior to that of GPT-3.5.



Topics

Artificial Intelligence in Healthcare and Education