This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Would ChatPDF be advantageous for expediting the interpretation of imaging and clinical articles in PDF format?
Citations: 2
Authors: 11
Year: 2024
Abstract
Background: The landscape of AI-driven chatbots is currently growing and offering new, remarkably useful solutions to common problems. In addition to powerful, wide-ranging chatbots such as ChatGPT, Bing Chat, and Google Bard, which can perform many different tasks, some applications focus on specific functionality to improve the user experience. One product that supports document reading is ChatPDF, which enables AI to read documents, extract data, and reply to user inquiries. Our objective is to test this AI tool for improving data extraction from clinical articles.

Methods: We assessed AI comprehension in 48 diverse scientific articles, examining sections such as the main topic, conclusions, results, materials and methods, sampling and data collection, and additional information from attached images, graphs, and tables. Five healthcare professionals, with expertise in emergency/abdominal radiology, thoracic radiology, musculoskeletal radiology, neuroradiology, and endocrinology, participated in the evaluation study. The primary objective was to score answers using a revised Mean Opinion Score (MOS) scale (0 to 5 points) and to compare performance across distinct article sections using the Gini heterogeneity index.

Results: The findings demonstrated adequate comprehension of the topic and the conclusions, a poor ability to extract information from the images, graphs, and tables attached to the articles, and insufficient performance regarding the methods of sampling and data collection.

Conclusions: ChatPDF can be useful for extracting the principal information from PDF articles, but not further in-depth details. An accurate and thorough guide to their use is essential, considering the potential and the critical issues already raised about these artificial intelligence (AI) systems. Ours is an attempt to explore the topic further, and it certainly requires validation.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations