This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Artificial Intelligence in Dialysis Care
Citations: 0
Authors: 1
Year: 2024
Abstract
Today, artificial intelligence (AI) applications are omnipresent in our everyday lives, from voice recognition to route planning, fraud protection, and online shopping recommendations. The pervasiveness of AI in our personal lives begets the question: "What is the role of AI in dialysis care?" In dialysis care, three main application areas for "classical" AI (i.e., non-generative AI) have emerged [1]: prediction, therapy recommendation, and diagnosis. Examples are the prediction of hospital admissions and the real-time prediction of intradialytic hypotension. The former has already been implemented in clinical practice and resulted in a lower hospitalization rate [2]. The latter involves the cloud-based integration of dialysis machine data and electronic health records with AI-powered prediction algorithms [3]. In several countries, AI applications are used to generate anemia therapy recommendations [4] that are subsequently reviewed by health care providers prior to prescription. The application of such an AI tool has resulted in improved attainment of hemoglobin targets, less severe anemia, and lower ESA utilization [5]. Regarding diagnostics, AI systems have been developed to categorize arteriovenous aneurysms as advanced or non-advanced [6]. In addition, natural language processing, an AI method for extracting insights from health care provider notes, has been shown to be superior to billing codes for identifying symptom burden in hemodialysis patients [7]. The recent advent of generative AI and large language models (LLMs) such as ChatGPT has instigated research into applications in dialysis care. For example, LLMs are being explored to support renal dietitians and provide better personalized care. So far, results have shown only moderate performance of LLMs [8]. It is expected that AI applications, both "classical" and "generative", will be expanded in the future.
Moving forward, it will be critically important to recognize potential flaws of AI systems, such as biases and their "black box" character, and to respect ethical and privacy concerns.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations