OpenAlex · Updated hourly · Last updated: 06.04.2026, 05:31

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Cloud or On-Premise? A Strategic View of Large Language Model Deployment

2025 · 0 citations · ScholarSpace (University of Hawaii at Manoa)
Open full text at the publisher

Citations: 0 · Authors: 3 · Year: 2025

Abstract

Large language models (LLMs) have advanced rapidly in recent years. We examine a critical decision faced by an LLM provider: whether to provide a local (on-premise) service channel in addition to cloud services. We develop a game-theoretical queueing model to analyze the economic and welfare implications of introducing an on-premise model. Our results show that offering the localization option can reduce the provider's optimal profit due to market cannibalization, yet increase users' overall surplus. Such market outcomes can be reinforced by users' privacy concerns, but may reverse when users differ significantly in their service valuations, as localization enables the provider to extract users' surplus more effectively. When localization is offered through a third party, price discrimination can further increase surplus extraction; however, the double marginalization along the AI supply chain may offset these gains. Finally, in competitive markets, localization may prompt an entrant to lower the quality of their cloud services to limit cannibalization, thereby softening price competition with the incumbent to some extent. Overall, our analysis highlights the strategic trade-offs in LLM deployment and provides guidance on pricing and localization decisions.

Authors

Topics

Big Data and Digital Economy · Artificial Intelligence in Healthcare and Education · Digital Platforms and Economics