This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The challenges for interpretable AI for well-being -understanding cognitive bias and social embeddedness-
Citations: 0
Authors: 2
Year: 2019
Abstract
In this AAAI Spring Symposium 2019, we discuss interpretable AI in the context of well-being AI. Interpretable AI refers to artificial intelligence methods and systems whose outputs can be easily understood by humans. Especially in the human health and wellness domains, wrong predictions may lead to critical judgements in life-or-death situations, so AI-based systems must be well understood. We define "well-being AI" as an AI research paradigm for promoting psychological well-being and maximizing human potential. Interpretable AI is important for well-being AI in two senses: (1) understanding how our digital experience affects our health and our quality of life, and (2) designing well-being systems that put humans at the center. One important keyword in understanding machine intelligence in human health and wellness is cognitive bias. Advances in big data and machine learning should not overlook new threats to enlightened thought, such as the recent trend of social media platforms and commercial recommendation systems being used to manipulate people's inherent cognitive biases. The second important keyword is "social embeddedness". Cognitive bias is affected by how the AI is perceived, particularly at the community or social level. Social embeddedness is the social science idea that the actions of individuals are refracted by the social relations within their community. In our context, understanding the relationship between AI and society is very important, which includes issues of AI and future economics (such as basic income and the impact of AI on GDP) and the "well-being society" (such as the happiness and quality of life of citizens). This paper describes the detailed motivation, important keywords, scope of interests, and research questions of this symposium.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,702 citations
Generative Adversarial Nets
2023 · 19,895 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,323 citations
"Why Should I Trust You?"
2016 · 14,544 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,195 citations