OpenAlex · Updated hourly · Last updated: 21.03.2026, 23:32

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A Testing Framework for AI Linguistic Systems (testFAILS)

2023 · 9 citations · Electronics · Open Access
Open full text at the publisher

9 citations · 6 authors · Year: 2023

Abstract

This paper presents an innovative testing framework, testFAILS, designed for the rigorous evaluation of AI Linguistic Systems (AILS), with particular emphasis on the various iterations of ChatGPT. Leveraging orthogonal array coverage, this framework provides a robust mechanism for assessing AI systems, addressing the critical question, “How should AI be evaluated?” While the Turing test has traditionally been the benchmark for AI evaluation, it is argued that current, publicly available chatbots, despite their rapid advancements, have yet to meet this standard. However, the pace of progress suggests that achieving Turing-test-level performance may be imminent. In the interim, the need for effective AI evaluation and testing methodologies remains paramount. Ongoing research has already validated several versions of ChatGPT, and comprehensive testing on the latest models, including ChatGPT-4, Bard, Bing Bot, and the LLaMA and PaLM 2 models, is currently being conducted. The testFAILS framework is designed to be adaptable, ready to evaluate new chatbot versions as they are released. Additionally, available chatbot APIs have been tested and applications have been developed, one of them being AIDoctor, presented in this paper, which utilizes the ChatGPT-4 model and Microsoft Azure AI technologies.
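The abstract names orthogonal array coverage as the framework's core test-selection mechanism. As a minimal sketch of what that means, the snippet below shows how a standard L4(2³) orthogonal array covers every pair of factor values for three binary test factors in only 4 runs instead of the 8 of a full factorial. The factor names and values ("model", "language", "length", etc.) are illustrative assumptions, not the authors' actual testFAILS parameters.

```python
# Illustrative sketch of orthogonal-array (pairwise) test coverage.
# The factors and values below are hypothetical, chosen only to
# demonstrate the idea, not taken from the testFAILS paper.
from itertools import combinations

factors = {
    "model": ["gpt-4", "bard"],
    "language": ["en", "de"],
    "length": ["short", "long"],
}

# Standard L4(2^3) orthogonal array: 4 runs instead of 8 full combinations.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

names = list(factors)
runs = [{n: factors[n][row[i]] for i, n in enumerate(names)} for row in L4]

def pairs_covered(test_runs):
    """Collect every (factor=value, factor=value) pair hit by some run."""
    seen = set()
    for run in test_runs:
        for a, b in combinations(names, 2):
            seen.add(((a, run[a]), (b, run[b])))
    return seen

# All 12 factor-value pairs that a pairwise-complete test set must hit.
all_pairs = {((a, va), (b, vb))
             for a, b in combinations(names, 2)
             for va in factors[a] for vb in factors[b]}

assert pairs_covered(runs) == all_pairs
print(len(runs), "runs cover all", len(all_pairs), "factor-value pairs")
```

The design choice behind such arrays is that most faults are triggered by interactions of at most two factors, so covering all pairs gives high defect detection at a fraction of the full-factorial cost — which is why the framework can keep re-running its suite as new chatbot versions appear.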

Topics

Artificial Intelligence in Healthcare and Education · AI in Service Interactions · Ethics and Social Impacts of AI