This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Explainability and Trust in Generative AI–Driven Customer Workflows: Methods for Responsible Enterprise Adoption
Citations: 0
Authors: 1
Year: 2026
Abstract
Generative artificial intelligence is increasingly embedded within enterprise customer relationship management workflows to automate communication, summarize interaction histories, and support consequential business decisions. While these capabilities deliver substantial productivity benefits, the opaque reasoning processes of large language models introduce significant risks related to trust, accountability, and regulatory compliance in high-stakes operational contexts such as sales forecasting, customer support, and contractual negotiations. Existing explainable AI literature has concentrated predominantly on predictive systems, leaving a methodological gap for organizations seeking to deploy generative AI responsibly in business-critical environments. This article proposes a comprehensive framework for explainability and trust in generative AI–driven enterprise customer workflows, introducing multi-level technical mechanisms including prompt lineage tracking, decision rationale generation, confidence scoring, and human-verifiable evidence extraction to render generative outputs auditable and interpretable at operational scale. A risk-stratified trust taxonomy is developed to classify workflow actions by consequence severity and required oversight, enabling adaptive human-in-the-loop intervention proportionate to operational risk. The framework further incorporates bias monitoring, hallucination detection, and immutable audit logging to support ethical and compliant operations within enterprise software infrastructure. Integration is demonstrated within a Salesforce-based CRM environment through a secure model gateway and policy enforcement architecture. Experimental deployment in an enterprise customer service context confirms that explanation provision improves user trust calibration, reduces escalation frequency, and decreases response rework compared to opaque automation conditions. 
Compliance maintenance is validated through traceable execution records satisfying enterprise data governance audit requirements. The article establishes one of the earliest systematic treatments of explainability designed specifically for generative AI in enterprise software, offering actionable technical and governance guidance for organizations pursuing trustworthy automation in consequential customer workflow contexts. Future directions address framework scalability, cross-platform generalization, and alignment with evolving regulatory compliance obligations under instruments including the EU AI Act.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,464 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,259 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,315 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,138 citations