This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Why prompting matters: achieving clinically accurate and consistent responses with Chat <scp>GPT</scp>
0
Citations
5
Authors
2025
Year
Abstract
Artificial intelligence (AI) has become a powerful tool in healthcare, offering the potential to analyse vast amounts of online information and provide personalised responses to user inquiries, including patient education on their health. However, patient trust in AI-generated medical information remains a significant concern. A survey conducted by the University of Michigan revealed that 74% of adults aged >50 years did not trust AI-generated health information, while 68% had trouble finding health information online [1]. This concern underscores the necessity to evaluate AI-generated responses to ensure they do not perpetuate misinformation.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations