OpenAlex · Updated hourly · Last updated: 31.03.2026, 14:43

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainability and Trust in Generative AI–Driven Customer Workflows: Methods for Responsible Enterprise Adoption

2026 · 0 citations · International Journal of Computational and Experimental Science and Engineering · Open Access
Open full text at the publisher

0

Citations

1

Authors

2026

Year

Abstract

Generative artificial intelligence is increasingly embedded within enterprise customer relationship management workflows to automate communication, summarize interaction histories, and support consequential business decisions. While these capabilities deliver substantial productivity benefits, the opaque reasoning processes of large language models introduce significant risks related to trust, accountability, and regulatory compliance in high-stakes operational contexts such as sales forecasting, customer support, and contractual negotiations. Existing explainable AI literature has concentrated predominantly on predictive systems, leaving a methodological gap for organizations seeking to deploy generative AI responsibly in business-critical environments. This article proposes a comprehensive framework for explainability and trust in generative AI–driven enterprise customer workflows, introducing multi-level technical mechanisms including prompt lineage tracking, decision rationale generation, confidence scoring, and human-verifiable evidence extraction to render generative outputs auditable and interpretable at operational scale. A risk-stratified trust taxonomy is developed to classify workflow actions by consequence severity and required oversight, enabling adaptive human-in-the-loop intervention proportionate to operational risk. The framework further incorporates bias monitoring, hallucination detection, and immutable audit logging to support ethical and compliant operations within enterprise software infrastructure. Integration is demonstrated within a Salesforce-based CRM environment through a secure model gateway and policy enforcement architecture. Experimental deployment in an enterprise customer service context confirms that explanation provision improves user trust calibration, reduces escalation frequency, and decreases response rework compared to opaque automation conditions. 
Compliance maintenance is validated through traceable execution records satisfying enterprise data governance audit requirements. The article establishes one of the earliest systematic treatments of explainability designed specifically for generative AI in enterprise software, offering actionable technical and governance guidance for organizations pursuing trustworthy automation in consequential customer workflow contexts. Future directions address framework scalability, cross-platform generalization, and alignment with evolving regulatory compliance obligations under instruments including the EU AI Act.
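The risk-stratified routing described in the abstract (classifying workflow actions by consequence severity and gating them on model confidence) might be sketched as follows. The tier names, thresholds, and the `WorkflowAction` type are illustrative assumptions for this page, not the paper's implementation:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g. drafting a routine customer reply
    MEDIUM = "medium"  # e.g. summarizing an account's interaction history
    HIGH = "high"      # e.g. contractual or sales-forecasting actions

# Hypothetical per-tier confidence thresholds below which a human must review.
# A threshold above 1.0 forces review for every action in that tier.
REVIEW_THRESHOLDS = {
    RiskTier.LOW: 0.5,
    RiskTier.MEDIUM: 0.75,
    RiskTier.HIGH: 1.01,
}

@dataclass
class WorkflowAction:
    description: str
    tier: RiskTier
    model_confidence: float  # confidence score attached to the generative output

def requires_human_review(action: WorkflowAction) -> bool:
    """Route the action to human-in-the-loop review when the model's
    confidence falls below the threshold for its risk tier."""
    return action.model_confidence < REVIEW_THRESHOLDS[action.tier]

# High-risk actions always escalate; confident low-risk actions proceed.
print(requires_human_review(WorkflowAction("send renewal quote", RiskTier.HIGH, 0.98)))   # True
print(requires_human_review(WorkflowAction("draft thank-you note", RiskTier.LOW, 0.9)))   # False
```

The oversight burden scales with consequence severity rather than being uniform, which is the adaptive human-in-the-loop property the abstract claims.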

Topics

Explainable Artificial Intelligence (XAI) · Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education