This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Producing nuclear disaster prevention materials with artificial intelligence chatbots: comparison of ChatGPT-3.5, Copilot, and Gemini output with Google search results
Citations: 0
Authors: 5
Year: 2024
Abstract
Objective: To compare the understandability, actionability, and readability of AI chatbot-generated text and webpage text about nuclear disasters.

Methods: In this cross-sectional study, we compared the understandability, actionability, and readability of texts generated by ChatGPT-3.5, Copilot, and Gemini with webpage sentences about radiation. Keywords related to radiation were extracted using Google Trends. A Google search was performed with the extracted keywords, and the top 8 pages for each keyword were retrieved. Each AI chatbot generated two types of sentences: normal level and 6th-grade level. The Japanese version of the Patient Education Materials Assessment Tool (PEMAT-P) was used to rate the understandability and actionability of each text; higher scores indicate greater perceived ease of understanding and acting on the material, and the cutoff for both was set at 70%. jReadability was used to quantitatively assess the readability of the Japanese texts.

Results: With regard to understandability, Copilot (n = 22, 71.0%) and Gemini (n = 26, 92.9%) 6th-grade-level texts had significantly higher percentages of scores of 70% or higher, while Google search results had a significantly lower percentage (n = 58, 32.8%; p < .05). Gemini at the normal level (n = 69, 55.2%), and Copilot (n = 74, 55.6%) and Gemini (n = 73, 56.2%) at the 6th-grade level, had significantly higher percentages of responses rated very readable to somewhat difficult (p < .05).

Conclusions: The Japanese sentences generated by the AI chatbots were easier to read than the Google search results, and the 6th-grade-level prompt improved the readability of the Japanese sentences. Thus, AI chatbots can be an effective tool for promoting understanding of radiation disaster prevention.
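The abstract's cutoff logic can be sketched in code: PEMAT-P items are rated agree/disagree, non-applicable items are excluded, and the percentage of agreed items determines whether a text clears the 70% threshold. This is a minimal illustrative sketch, not the actual instrument or the authors' analysis code; the function names and the rating encoding (1 = agree, 0 = disagree, None = not applicable) are assumptions for illustration.

```python
# Hypothetical sketch of PEMAT-P-style percentage scoring with the
# 70% cutoff mentioned in the abstract. Item encoding is assumed:
# 1 = agree, 0 = disagree, None = not applicable (excluded from scoring).

CUTOFF = 70.0  # threshold used in the study for understandability/actionability

def pemat_score(ratings):
    """Return the percentage score: agreed items / applicable items * 100."""
    applicable = [r for r in ratings if r is not None]
    if not applicable:
        raise ValueError("no applicable items to score")
    return 100.0 * sum(applicable) / len(applicable)

def meets_cutoff(ratings, cutoff=CUTOFF):
    """True if the text's score reaches the cutoff."""
    return pemat_score(ratings) >= cutoff

# Example: 8 of 10 applicable items agreed -> 80.0%, above the 70% cutoff.
ratings = [1, 1, 1, 0, 1, 1, None, 1, 1, 0, 1]
print(pemat_score(ratings))   # 80.0
print(meets_cutoff(ratings))  # True
```

A study-level result such as "n = 26, 92.9%" is then simply the share of evaluated texts for which `meets_cutoff` is true.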
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations