This is an overview page with metadata for this scientific work. The full article is available from the publisher.
<i>Editorial Commentary:</i> Chat Generative Pre-Trained Transformer (ChatGPT) Provides Misinformed Responses to Medical Questions
0
Citations
1
Author
2024
Year
Abstract
Surgeons have dealt with the negative effects of misinformation from "Dr. Google" since patients started using search engines to seek out medical information. With the advent of natural language processing software such as Chat Generative Pre-Trained Transformer (ChatGPT), patients may have a seemingly real conversation with artificial intelligence software. However, ChatGPT provides misinformation in response to medical questions and responds at the reading level of a college freshman, whereas the U.S. National Institutes of Health recommends that medical information be written at a 6th-grade level. The flaw of ChatGPT is that it recycles information from the Internet. It is "artificially intelligent" because of its ability to mimic natural language, not because of its ability to understand and synthesize content. It fails to understand nuance or critically analyze new inputs. Ultimately, these skills require human intelligence, whereas ChatGPT provides responses that are exactly what you might expect: artificial.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations