This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Artificial Intelligence versus Software Engineers: An Evidence-Based Assessment Focusing on Non-Functional Requirements
Citations: 9
Authors: 3
Year: 2023
Abstract
The automation of Software Engineering (SE) tasks using Artificial Intelligence (AI) is growing, with AI increasingly leveraged for project management, modeling, testing, and development. Notably, ChatGPT, an AI-powered chatbot, has been introduced as a versatile tool for code writing and test plan generation. Despite the excitement around AI's potential to elevate productivity and even replace human roles in software development, solid empirical evidence remains scarce. Normally, a software engineer's solution is evaluated against a variety of non-functional requirements such as performance, efficiency, reusability, and usability, among others. This study presents an empirical exploration of the performance of software engineers versus AI on specific development tasks, using an array of quality parameters. Our aim is to enhance the interplay between humans and machines, increase the trustworthiness of AI methodologies, and identify the best performers for each task. In doing so, this study also contributes to refining cooperative or human-in-the-loop workflows in the context of software engineering. The study investigates two distinct scenarios: the analysis of ChatGPT-produced code against developer-created code on Leetcode, and the comparison of automated machine learning (Auto-ML) and manual methods in the creation of a control structure for an Internet of Things (IoT) application. Our findings reveal that while software engineers excel in some scenarios, AI performs better in others. This empirical study helps forge a new pathway for collaborative human-machine intelligence where AI's capabilities can augment human skills in software engineering.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,521 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,412 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,891 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,575 citations