OpenAlex · Updated hourly · Last updated: 16.03.2026, 00:29

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

From Prohibition to Provenance: A Working-Paper Demonstration of Human–AI Cognitive Contribution Assessment

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at publisher

Citations: 0
Authors: 1
Year: 2026

Abstract

This record presents a bundled working-paper demonstration of a proposed future system for evaluating human–AI cognitive contribution in research and education. It is released deliberately as a transparent simulation and virtue exercise, rather than as a conventional single-paper publication. The record contains four related documents, intentionally co-located to model how future assessment systems might operate through provenance, reflexive evaluation, and visible human judgement, rather than through prohibition, detection, or automated attribution:

1. From Prohibition to Provenance: Rating Human–AI Cognitive Contribution in Research and Education. The primary working paper proposing the Cognitive Contribution Rating (CCR) framework. It argues for a shift from tool-based governance (“was AI used?”) toward judgement-based evaluation (“where did human responsibility intervene?”), introducing a two-axis model distinguishing structural and applied cognitive contribution.

2. Assessing Cognitive Contribution: An Evaluative Application of the CCR Framework. A standalone assessment paper applying the CCR framework to the primary work as a post-publication transparency audit. It substantiates contribution claims where evidence permits, identifies verification limits, and demonstrates how the framework behaves under scrutiny without claiming automated or definitive attribution.

3. CCR Framework Assessment Sheet for External Review. An external evaluative artefact simulating educational or third-party assessment of the primary paper using CCR-aligned criteria. It is included to illustrate how independent review might interact with declared cognitive provenance in practice.

4. Dialogic Process Notes: Simulated Human–AI Interaction. A non-authoritative process artefact presented in dialogic form to illustrate the interactive forces (questioning, resistance, reframing, constraint, and acceptance) through which human judgement is exercised in AI-assisted research.
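The two-axis model described above is conceptual, and the record does not prescribe any software representation. Purely as an illustration of what a self-declared, human-assessed rating along those two axes might look like, here is a minimal sketch; every name here (`Contribution`, `CCRDeclaration`, `summarise`, the ordinal labels) is a hypothetical assumption of this sketch, not part of the CCR framework itself:

```python
from dataclasses import dataclass
from enum import Enum


class Contribution(Enum):
    """Hypothetical ordinal levels for one CCR axis; labels are illustrative."""
    MINIMAL = 1
    PARTIAL = 2
    SUBSTANTIAL = 3
    FULL = 4


@dataclass
class CCRDeclaration:
    """One self-declared activity, rated on the two CCR axes by a human."""
    activity: str                # e.g. "problem framing", "drafting"
    structural: Contribution     # human contribution to framing/architecture
    applied: Contribution        # human contribution to execution/realisation
    note: str = ""               # free-text provenance note for human assessors


def summarise(declarations: list[CCRDeclaration]) -> dict[str, float]:
    """Average each axis across declarations: a talking point for human
    review, not an automated verdict or attribution."""
    n = len(declarations)
    return {
        "structural": sum(d.structural.value for d in declarations) / n,
        "applied": sum(d.applied.value for d in declarations) / n,
    }


declarations = [
    CCRDeclaration("problem framing", Contribution.FULL,
                   Contribution.SUBSTANTIAL, "human-originated research question"),
    CCRDeclaration("drafting", Contribution.PARTIAL,
                   Contribution.PARTIAL, "AI-assisted first draft, human revision"),
]
summary = summarise(declarations)  # {'structural': 3.0, 'applied': 2.5}
```

Consistent with the record's disclaimer, such a structure would only carry declarations and support discussion; interpretation would remain with human assessors.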
The dialogue is abstracted and illustrative, not a transcript, and does not attribute authority or responsibility to non-human participants.

The inclusion of a framework proposal, internal assessment, external review simulation, and dialogic process notes within a single record is deliberate. It is intended to function as a worked example of how future research and educational assessment systems might foreground transparency, human responsibility, and evaluative plurality without resorting to surveillance, gatekeeping, or binary classifications. Readers are invited to treat this record both as a substantive contribution and as a methodological experiment. None of the accompanying assessments constitute endorsement or certification; final judgement remains human, contextual, and open to challenge.

The framework articulated here is not designed as a plug-and-play utility for the current adversarial landscape, nor as a standalone instrument of enforcement. Rather, it is conceived as a foundational architecture for a future institutional state: one in which AI systems are integrated within registered, permissioned environments (e.g., “walled-garden” research or university infrastructures). Within such managed ecosystems, the framework serves as an evidentiary scaffold for the visible exercise of human agency and judgement. Its purpose is to move governance from the punitive policing of outputs toward the principled stewardship of cognitive provenance, ensuring that intellectual responsibility remains human-bound even as generative capacity is distributed. The author anticipates that further work will concern organisational and governance architecture rather than software implementation, and would undertake such work collaboratively and without charge where institutions actively seek to explore these models.

Disclaimer

This record is released as a working-paper bundle and methodological simulation.
The Cognitive Contribution Rating (CCR) framework, assessment sheets, and accompanying artefacts are descriptive and exploratory, not prescriptive, normative, or authoritative. They are not intended to certify authorship, assign credit, detect misconduct, or replace human academic judgement. No claim is made that cognitive contribution can be fully verified, quantified, or automated. The assessment materials may be used by individuals or groups for self-reflection, peer discussion, or exploratory evaluation, but any conclusions drawn remain provisional and contextual. Responsibility for interpretation, acceptance, or rejection of claims rests entirely with human assessors. Nothing in this record constitutes institutional guidance, accreditation, or endorsement, nor should it be used as the basis for disciplinary action or exclusion. The framework is offered to reduce fear, not to create new forms of surveillance or compliance. In particular, the materials may be used privately by authors to reflect on their own human–AI working practices prior to submission or publication, without obligation to disclose such reflections unless they choose to do so.

This research is produced independently under the Drive-In s.r.o. research programme. Readers who wish to support its continuation may do so here: https://ko-fi.com/johnryder99892

Topics

Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI · Explainable Artificial Intelligence (XAI)