OpenAlex · Updated hourly · Last updated: 21.03.2026, 06:07

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

User Experience with On-Premise Large Language Models in a German University Medicine: Insights from a Survey-Based Evaluation (Preprint)

2025 · 0 citations · Open Access
Open full text at the publisher

0 Citations

6 Authors

2025 Year

Abstract

<sec> <title>BACKGROUND</title> Large language models (LLMs) are increasingly used by employees at university hospitals for information retrieval and decision support. Self-hosted on-premise systems provide a secure environment that conforms with data privacy and security regulations for handling sensitive personal data. By automating standard procedures with LLM applications, time-consuming administrative tasks can be drastically reduced and the analysis of large data sets facilitated. </sec> <sec> <title>OBJECTIVE</title> The objective of our study was to gather feedback from registered AI users on the usability and common use cases of the on-premise LLM infrastructure we established at the University Medicine Magdeburg, in order to tailor the models to the needs of our facility. </sec> <sec> <title>METHODS</title> We developed an online questionnaire to which registered AI users were given access; they were informed via email. </sec> <sec> <title>RESULTS</title> Of 322 registered AI users, 98 participated in the user survey. After filtering out incomplete responses, results from 91 participants remained for analysis. Speed and quality received high overall approval ratings. A majority of users used the platform at least once per week, and 44% reported saving at least 30 minutes of work per week by using the AI platform. Use cases varied with the users' professions; for example, healthcare and research professionals used the AI platform far more often for creation or analysis tasks than administrative staff did. </sec> <sec> <title>CONCLUSIONS</title> Our data indicate that implementing a self-hosted on-premise LLM has a positive influence on the diverse group of professionals working at a university hospital, saving time and meeting their individual needs. </sec>


Topics

Artificial Intelligence in Healthcare and Education · Electronic Health Records Systems · Ethics and Social Impacts of AI