This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
The Lack of Other Minds as the Lack of Coherence in Human–AI Interactions
Citations: 2
Authors: 1
Year: 2025
Abstract
As artificial intelligence (AI) advances rapidly, two enduring questions in the philosophy of language and linguistics persist: the problem of other minds and the problem of coherence. These can be explored through the following question: is there a fundamental difference between human–AI interactions and human–human interactions? More precisely, does an AI partner's ability to understand discursive coherence sufficiently approximate that of the human mind? This study frames the problem of other minds as a problem in discourse analysis, positing that linguistic exchange inherently constitutes interaction between minds, where the act of decoding discursive coherence serves as a proxy for apprehending other minds. Guided by this perspective, the study applies four criteria of discursive coherence to examine how AI partners (with a focus on ChatGPT) achieve discursive coherence, thereby assessing whether an AI partner's ability to understand discursive coherence suffices to simulate the human mind. Through a comparison of human–human and human–AI interactions, the results indicate that while ChatGPT demonstrates proficiency in constructing discursive coherence along dictional, intentional, emotional, and rational lines, the structural complexity and generative creativity of its coherence lines remain significantly below the threshold observed in human–human interactions. Moreover, ChatGPT's emotional expressiveness pales in comparison to the rich, nuanced affect inherent in human–human interactions.