This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The AI-Human Unethicality Gap: Plagiarizing AI-generated Content Is Seen As More Permissible
Citations: 8
Authors: 3
Year: 2023
Abstract
The emergence of generative AI has raised unprecedented concerns about plagiarism. We present six preregistered studies demonstrating that plagiarizing material created by AI is seen as less unethical and more permissible than plagiarizing material created by a human—an AI-human unethicality gap. Students report having plagiarized more from AI than human-generated content in the past (Study 1) and indicate greater willingness to do so in their school assignments, even when ease and convenience of accessing such content are held constant (Study 2). Moreover, people judge plagiarizing AI-generated content as less unethical and more permissible than plagiarizing human-generated content and are less likely to view it as plagiarism (Study 3). Rather than being due to differences in legal ownership (Study 4), the AI-human unethicality gap is explained by psychological ownership over the copied material (Studies 4 and 5). AI is perceived as owning the content it creates to a lesser extent than humans: when using content produced by AI (vs. humans), users are afforded greater psychological ownership over the content, reducing the perceived unethicality of passing off the content as their own. Differences in psychological ownership appear to stem from ascriptions of sentience to the content creator: imbuing AI with sentience attenuates differences in perceived ownership and in turn the AI-human unethicality gap (Study 6). These findings contribute to understanding the social effects of AI, attribution of psychological ownership, and navigating plagiarism in the age of AI.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,287 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,140 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,534 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,450 citations