OpenAlex · Updated hourly · Last updated: 14.03.2026, 13:38

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Tutorial: Fundamentals of (and Tools for) Trustworthy Artificial Intelligence in Smart Health

2025 · 1 citation
Open full text at the publisher

Citations: 1
Authors: 1
Year: 2025

Abstract

Outline of the Tutorial

Artificial Intelligence (AI) is pervading many aspects of our society. This raises the challenge of ensuring that people are not sidelined when their own data are processed by AI systems whose decisions may result in harmful discrimination. Our focus is on knowledge representation and on enhancing human-centered information processing in the context of Trustworthy AI. Endowing AI with trustworthiness encompasses both technical and non-technical challenges. In this tutorial, in addition to technical aspects (i.e., disruptive human-centered technologies as well as human-friendly computer tools covering all phases of the design, analysis, and evaluation of trustworthy intelligent systems), we will also discuss the Ethical, Legal, Socio-Economic and Cultural (ELSEC) implications of AI. Special emphasis will be placed on certifying whether intelligent systems comply with European values. Taking explainability as a prerequisite for trustworthiness, Explainable AI (XAI for short) is an endeavor to evolve AI methodologies and technology by developing intelligent systems capable of generating decisions that a human can understand, and also of explicitly explaining those decisions. This makes it possible to scrutinize the underlying intelligent models and verify whether automated decisions are made according to accepted rules and principles, so that the decisions can be trusted and their impact justified. Accordingly, intelligent systems are expected to interact naturally with humans, providing comprehensible explanations of the decisions they make automatically. Although this tutorial will introduce the main concepts and methods of XAI in general, a major focus will be on how to properly deal (and compute) with words and perceptions when generating and evaluating textual explanations for smart health.
More precisely, we will consider the explainable design of Fuzzy Sets and Systems in combination with pre-trained Large Language Models, paving the way from interpretable machine learning to Trustworthy AI. Such systems deal naturally with uncertainty and approximate reasoning (as humans do) through computing with words and perceptions, making it easier for humans to scrutinize the underlying intelligent models. Moreover, human-AI interaction is natural and faithful.
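The "computing with words" idea mentioned above can be illustrated with a minimal sketch (an assumed example, not code from the tutorial): triangular fuzzy membership functions map a numeric health reading to linguistic terms, from which a short textual explanation can be generated. The variable, term names, and thresholds below are hypothetical.

```python
def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical linguistic terms for systolic blood pressure (mmHg).
TERMS = {
    "low":    (70, 90, 110),
    "normal": (100, 120, 140),
    "high":   (130, 160, 200),
}

def describe(reading):
    """Return the best-matching linguistic label and its membership degree."""
    degrees = {term: triangular(reading, *abc) for term, abc in TERMS.items()}
    label = max(degrees, key=degrees.get)
    return label, degrees[label]

label, degree = describe(125)
print(f"Reading 125 mmHg is '{label}' to degree {degree:.2f}")
# → Reading 125 mmHg is 'normal' to degree 0.75
```

Attaching a linguistic label with a graded degree of truth, rather than a hard threshold, is what lets such systems produce explanations a person can read and contest.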


Topics

Artificial Intelligence in Healthcare and Education