This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Cloud or On-Premise? A Strategic View of Large Language Model Deployment
Citations: 0
Authors: 3
Year: 2025
Abstract
Large language models (LLMs) have advanced rapidly in recent years. We examine a critical decision faced by an LLM provider: whether to offer a local (on-premise) service channel in addition to cloud services. We develop a game-theoretic queueing model to analyze the economic and welfare implications of introducing an on-premise option. Our results show that offering the localization option can reduce the provider's optimal profit due to market cannibalization, yet increase users' overall surplus. These market outcomes can be reinforced by users' privacy concerns, but may reverse when users differ significantly in their service valuations, as localization enables the provider to extract users' surplus more effectively. When localization is offered through a third party, price discrimination can further increase surplus extraction; however, double marginalization along the AI supply chain may offset these gains. Finally, in competitive markets, localization may prompt an entrant to lower the quality of its cloud services to limit cannibalization, thereby softening price competition with the incumbent to some extent. Overall, our analysis highlights the strategic trade-offs in LLM deployment and provides guidance on pricing and localization decisions.
Related Works
Federated Learning: Challenges, Methods, and Future Directions
2020 · 4,411 citations
Deep Learning: Methods and Applications
2014 · 3,312 citations
Mobile Edge Computing: A Survey on Architecture and Computation Offloading
2017 · 2,904 citations
Machine Learning: An Artificial Intelligence Approach
2013 · 2,639 citations
Machine Learning and Deep Learning
2021 · 2,342 citations