This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Detecting Role Impersonation in AI-Generated Clinical Oncology Text Using Knowledge Graphs
Citations: 0
Authors: 2
Year: 2025
Abstract
Large language models (LLMs) are increasingly used in oncology for drafting radiology reports, pathology summaries, tumor-board notes, and clinical trial documentation. While their fluency and contextual adaptability enable workflow acceleration, such linguistic proficiency can conceal a critical risk: impersonation of oncologists or cancer researchers by mimicking professional tone without adhering to oncologic reasoning or procedural logic. This work presents a framework for detecting domain-level role impersonation in AI-generated oncology communication. Central to our method is the role impersonation index (RII), a composite metric that quantifies semantic fidelity and procedural coherence (PC) by aligning generated content with expert knowledge encoded in structured ontologies such as SNOMED CT, UMLS, and cancer-specific resources, including NCIt and OncoTree. Evaluated on 4,800 clinical oncology notes, balanced between human and AI authorship, our framework identifies inconsistencies in tumor terminology, diagnostic staging, biomarker logic, and therapeutic sequencing without relying on superficial linguistic cues, achieving an F1-score of 94% and an AUROC of 0.96. By embedding cancer-domain semantics and established clinical workflows into the detection process, the proposed approach provides a principled safeguard for the trustworthy integration of generative AI in oncology practice. To our knowledge, this is the first framework that integrates knowledge-graph alignment with procedural logic scoring to detect role impersonation in cancer-related clinical documentation.