This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Large Language Model (LLM)-Powered Chatbots Fail to Generate Guideline-Consistent Content on Resuscitation and May Provide Potentially Harmful Advice
Citations: 45
Authors: 2
Year: 2023
Abstract
The advice offered by LLM-powered chatbots on helping a non-breathing victim omits essential details of resuscitation technique and occasionally contains misleading, potentially harmful directives. Further research and regulatory measures are required to mitigate risks related to chatbot-generated public misinformation on resuscitation.
Related Works
Ventilation with Lower Tidal Volumes as Compared with Traditional Tidal Volumes for Acute Lung Injury and the Acute Respiratory Distress Syndrome
2000 · 12,745 citations
Early Goal-Directed Therapy in the Treatment of Severe Sepsis and Septic Shock
2001 · 10,717 citations
Acute renal failure – definition, outcome measures, animal models, fluid therapy and information technology needs: the Second International Consensus Conference of the Acute Dialysis Quality Initiative (ADQI) Group
2004 · 6,778 citations
Treatment of Comatose Survivors of Out-of-Hospital Cardiac Arrest with Induced Hypothermia
2002 · 5,400 citations
Mild Therapeutic Hypothermia to Improve the Neurologic Outcome after Cardiac Arrest
2002 · 5,202 citations