This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Continuous, Corrigible Judgment Execution: What Makes AI Systems Feel Like Collaborators
Citations: 0
Authors: 1
Year: 2026
Abstract
Large language model–based (LLM-based) systems are increasingly used through sustained, interactive engagement rather than isolated, one-shot invocation. In practice, users observe intermediate results, correct reasoning, and guide how decisions are made as work unfolds. This paper analyzes this mode of use by focusing on judgment execution as an interactional process. We distinguish between episodic and continuous judgment execution and show why continuous, corrigible judgment execution is especially effective for underspecified, uncertainty-driven tasks. By identifying a small set of recurring interaction primitives, the paper explains how LLM-based systems support collaborative work in domains such as writing, research, and software development. This perspective clarifies why such systems feel intelligent in use and provides a foundation for future work on the design and governance of judgment-oriented AI systems.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,495 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,853 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,372 citations
Fairness through awareness
2012 · 3,265 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations