This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Bullying an AI? Misbehavior toward an AI moral patient in interactive marketing
Citations: 0
Authors: 4
Year: 2026
Abstract
Purpose We examined how people perceive artificial intelligence (AI) as a moral patient – the target of morally wrong actions by humans. Design/methodology/approach We conducted a thorough literature review, integrating perspectives from the marketing, management and human–AI interaction literatures, and proposed a theoretical framework. Findings Building on this notion, we proposed a novel typology that classifies the situations in which AI is most likely to be involved in ethical issues, combining the perceived experience of AI with the moral intensity of the tasks performed by AI. Furthermore, we outlined the mechanisms that lead to people's misbehavior toward AI by examining the extrinsic costs and intrinsic psychological costs of misbehavior. Originality/value AI is rapidly changing the way service encounters take place and transforming people's overall experience. Although interactions between humans and AI are gradually becoming part of individuals' everyday lives, people's moral consideration of AI and their ethical behavior toward it are still unclear. Therefore, our research enhances the understanding of AI-related ethical issues.
Similar works
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller
1999 · 5,633 citations
An experiment in linguistic synthesis with a fuzzy logic controller
1975 · 5,583 citations
A FRAMEWORK FOR REPRESENTING KNOWLEDGE
1988 · 4,551 citations
Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
2023 · 3,431 citations