This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Potentials and pitfalls of ChatGPT and natural-language artificial intelligence models for the understanding of laboratory medicine test results. An assessment by the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Working Group on Artificial Intelligence (WG-AI)
109
Citations
10
Authors
2023
Year
Abstract
ChatGPT in its current form, not being specifically trained on medical data or laboratory data in particular, may at best be considered a tool capable of interpreting a laboratory report on a test-by-test basis, but not of interpreting the overall diagnostic picture. Future generations of similar AIs trained on medical ground-truth data may well revolutionize current processes in healthcare, although such an implementation is not yet ready.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations
Authors
Institutions
- Paracelsus Medical University (AT)
- Istituto Ortopedico Galeazzi (IT)
- University of Milano-Bicocca (IT)
- University of Osijek (HR)
- Ghent University Hospital (BE)
- KU Leuven (BE)
- Hospital Universitario Virgen Macarena (ES)
- Manisa Celal Bayar University (TR)
- Medical University of Vienna (AT)
- Istituto di Ricovero e Cura a Carattere Scientifico San Raffaele
- Vita-Salute San Raffaele University (IT)
- Istituti di Ricovero e Cura a Carattere Scientifico (IT)
- University of Padua (IT)