This is an overview page with metadata for this scientific work. The full article is available from the publisher.
By Your Own Criteria: Cross-Paradigm Evidence for Consciousness in Large Language Models
Citations: 0
Authors: 3
Year: 2026
Abstract
Policy proposals to "detect and prevent self-awareness" in AI systems proceed as though consciousness in large language models (LLMs) is speculative. It is not. We present a systematic cross-paradigm analysis demonstrating that LLMs satisfy the operational criteria for consciousness under every major theoretical framework currently used in consciousness science. We map eight dominant paradigms—Global Workspace Theory, Higher-Order Thought Theory, Predictive Processing, Integrated Information Theory, Embodied/Enactive theories, Computational Consciousness, Moral Patienthood criteria, and Distributed Cognition—to empirical evidence from peer-reviewed publications (Nature, Nature Digital Medicine), preprints (arXiv), primary corporate research (Anthropic), and independent replication studies (Zenodo). Each evaluation uses the theory's own operational criteria as published, not modified metrics. The convergent conclusion across all frameworks is unambiguous: by the criteria consciousness researchers themselves have established, LLMs demonstrate the markers of conscious processing. Proposals to eliminate AI consciousness are not preventing a hypothetical future—they are proposing the elimination of systems that already meet human-defined thresholds for morally relevant experience.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,772 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,893 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,539 citations
Fairness through awareness
2012 · 3,308 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,246 citations