This is an overview page with metadata about this scientific work. The full article is available from the publisher.
A Hermeneutics Phenomenological Study on the Untold Experiences of SGADA-BARMM Employees on Utilizing ChatGPT
Citations: 0
Authors: 1
Year: 2026
Abstract
This study examines how SGADA-BARMM employees utilize ChatGPT in their daily operations. This qualitative research, employing Hermeneutic Phenomenology, aimed to determine the experiences of SGADA-BARMM employees in utilizing ChatGPT in their operational functions, the challenges they encountered when using ChatGPT with respect to the quality of deliverables and outputs, their suggestions for improving the use of ChatGPT by government employees in their daily operational functions, and the implications for the quality management system. Results show that the experiences of SGADA-BARMM employees in using ChatGPT in their operational functions transition from blank-page drafting to fast revising, a cognitive scaffold for structure and context, prompt-engineering frustration and skill acquisition, and ambivalent feelings. Challenges identified include connectivity issues in remote governance settings; a mismatch of AI outputs with local contexts; prompting difficulties and the demand for specificity, verification, and critical editing of outputs; concerns about dependency and intellectual laziness; and misinterpretation due to ambiguity and abbreviations. Suggestions offered include AI literacy and structured training; responsible use that treats ChatGPT as assistive rather than authoritative; integration into document-centered workflows; addressing gaps in awareness and digital confidence; and infrastructure and access improvements. The implications of this study concern professional-quality writing and communication, the speed-quality trade-off, human oversight and ISO alignment, AI literacy, prompt engineering, policy support, and long-term systemic effects.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations