This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Terms of (Ab)Use: An Analysis of GenAI Services
Citations: 0
Authors: 5
Year: 2026
Abstract
Generative AI services like ChatGPT and Gemini are among the fastest-growing consumer services. Individuals using such services must accept their terms of use before gaining access, and must conform to these terms for continued use of the service. Established literature has shown that despite their status as legally binding agreements, terms of use are not actually well understood, and may carry implications that surprise consumers. In this paper, we analyse the terms of 6 generative AI services from the perspective of an EU-based consumer. Our findings, based on a codebook we developed and provide in the paper, reiterate known issues with generative AI services, such as the default use of user data for training, and surface new concerns regarding responsibility, liability, and rights. All terms in our analysis contained language that explicitly disclaims assurances regarding the quality, availability, and appropriateness of the service, regardless of whether the service is free or paid. The terms also make users solely responsible for ensuring outputs meet norms dictated by the provider, despite users being given no information about or control over the functioning of the model, and at the risk of account termination. The terms further restrict how users may use outputs, while service providers utilise both user-provided inputs and user-liable outputs for a wide variety of purposes at their discretion. The implications of these practices are severe: we find that consumers lack necessary information, face a significant imbalance of power, and bear responsibilities they cannot materially fulfil without violating the terms. To remedy this situation, we make concrete recommendations for authorities and policymakers to urgently upgrade existing consumer protection mechanisms to tackle this growing issue.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations