This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
An Experimental Study of the Efficacy of Prompting Strategies in Guiding ChatGPT for a Computer Programming Task
Citations: 4 · Authors: 4 · Year: 2024
Abstract
In the rapidly advancing era of artificial intelligence (AI), optimising language models such as the Chat Generative Pre-trained Transformer (ChatGPT) for specialised tasks like computer programming remains an open problem, and the quality and correctness of the code ChatGPT generates are inconsistent. This study analyses how two different prompting strategies, text-to-code and code-to-code, affect ChatGPT's responses to programming tasks. The study adopted an experimental design that presented ChatGPT with a diverse set of programming tasks and prompts spanning various programming languages, difficulty levels, and problem domains. The generated outputs were rigorously tested and evaluated for accuracy, latency, and qualitative aspects. The findings indicated that code-to-code prompting significantly improved accuracy, achieving a 93.55% success rate compared with 29.03% for text-to-code. Code-to-code prompts were effective across all difficulty levels, whereas text-to-code prompts struggled, especially on harder tasks. Based on these findings, computer programming students should appreciate that how ChatGPT is prompted is essential to obtaining the desired output. By using optimised prompting methods, students can achieve more accurate and efficient code generation, enhancing the quality of their code. Future research should explore the balance between prompt specificity and code efficiency, investigate additional prompting strategies, and develop best practices for prompt design to optimise the use of AI in software development.
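The two strategies compared in the abstract can be illustrated with hypothetical prompt templates. The task (Fibonacci) and the draft function below are invented for illustration; the study does not publish its exact prompts, and this is only a sketch of what the two prompt styles typically look like.

```python
# Text-to-code prompting: the desired program is described
# in natural language only, with no code in the prompt.
text_to_code_prompt = (
    "Write a Python function that returns the n-th Fibonacci number "
    "using iteration."
)

# Code-to-code prompting: the prompt supplies existing code
# (here, a naive recursive draft) and asks the model to
# transform it into the desired program.
draft = """def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)"""

code_to_code_prompt = (
    "Rewrite the following recursive function as an iterative one "
    "with O(n) time and O(1) space:\n" + draft
)

print(text_to_code_prompt)
print(code_to_code_prompt)
```

The difference matters because a code-to-code prompt anchors the model to concrete identifiers and structure, which the study found to improve accuracy substantially.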
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations