This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Designing for Explainability and Data Sovereignty: A Design Principles Approach for LLM-Augmented FinTech Analytics
Citations: 0
Authors: 5
Year: 2025
Abstract
This study reports on the design and development of a practical analytics system that responds to the growing need for data-driven work among users without formal training in programming or data science. The system uses large language models (LLMs) to support natural language interaction and to guide users through common data analysis tasks. Compared with typical analytics tools, the system does more than simply run models in the background. It explains in plain language what each model is doing and why particular results appear, and it walks the user through the choice of methods step by step. The architecture can connect to different locally deployed LLMs (for example LLaMA, Qwen, or DeepSeek), so organisations are not locked into a single provider. All interaction takes place through a chat-style interface: users upload a dataset, describe their question, and the system handles the configuration and code. The artefact was shaped through a Design Science Research (DSR) process, with several iterations of design, feedback and revision involving potential users. In its current form, a proof-of-concept implementation and scenario-based examples show that non-technical users are able to understand their data more clearly and make more informed choices among analytical options. Taken together, these features point to a practical and adaptable framework that brings explainable, LLM-supported analytics within reach of a much wider group of professionals.
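The abstract describes an architecture that can connect to different locally deployed LLMs without provider lock-in. The paper does not publish its code, so the following is only a minimal Python sketch of one way such a backend-agnostic adapter could look; the class and function names (`LLMBackend`, `EchoBackend`, `ask`) are illustrative assumptions, and the stub backend merely stands in for a real local model endpoint.

```python
from abc import ABC, abstractmethod


class LLMBackend(ABC):
    """Adapter interface: each concrete class wraps one locally deployed model."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        ...


class EchoBackend(LLMBackend):
    """Stand-in for a real model server (e.g. a LLaMA, Qwen, or DeepSeek deployment)."""

    def __init__(self, name: str):
        self.name = name

    def generate(self, prompt: str) -> str:
        # A real backend would call the local inference server here.
        return f"[{self.name}] {prompt}"


# Registry of whichever local models the organisation has deployed.
BACKENDS: dict[str, LLMBackend] = {
    name: EchoBackend(name) for name in ("llama", "qwen", "deepseek")
}


def ask(backend_name: str, prompt: str) -> str:
    """Route a chat turn to the selected backend; swapping providers is a key change."""
    return BACKENDS[backend_name].generate(prompt)


print(ask("qwen", "Summarise the column 'revenue'."))
```

Keeping the chat layer coded against the abstract interface rather than a specific vendor API is what allows the provider swap the abstract emphasises.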