This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Using leadership to leverage ChatGPT and artificial intelligence for undergraduate and postgraduate research supervision
Citations: 42
Authors: 4
Year: 2023
Abstract
ChatGPT and other artificial intelligence (AI) tools and large language models (LLMs) have taken higher education by storm. Much of the research focuses on how this and similar tools can be leveraged for the effective education of undergraduate coursework students. In this study, we explore the emerging benefits and limitations of ChatGPT and LLMs in the context of undergraduate and postgraduate research supervision. We found that psychological need fulfilment, research student autonomy and relatedness were key outcomes that could be cultivated at the student level. At a unit or subject level, the opportunity for formative feedback was seen as a strength. We also discuss some key limitations of the tool, including its limited ability to deconstruct social injustice and to generate content appropriate to context. We used an example of leadership research to highlight that it may privilege positive outcomes and, likewise, present information reflecting current and normative practices rather than desired future practices. We conclude by considering the broad implications of this work for research supervision relationships.
Implications for practice or policy:
ChatGPT has the ability to enhance higher degree research practices.
AI and LLMs may support student psychological need fulfilment, autonomy, competence and relatedness.
ChatGPT could provide preliminary formative feedback for research and doctoral students before they submit drafts to supervisory teams.
Policy safeguards are needed to address lack of context, data bias, equity concerns and the lack of an ethical framework.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations