This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Laboratory-Scale AI: Open-Weight Models are Competitive with ChatGPT Even in Low-Resource Settings
Citations: 9
Authors: 11
Year: 2024
Abstract
The rapid proliferation of generative AI has raised questions about the competitiveness of lower-parameter, locally tunable, open-weight models relative to high-parameter, API-guarded, closed-weight models in terms of performance, domain adaptation, cost, and generalization. Centering under-resourced yet risk-intolerant settings in government, research, and healthcare, we see for-profit closed-weight models as incompatible with requirements for transparency, privacy, adaptability, and standards of evidence. Yet the performance penalty in using open-weight models, especially in low-data and low-resource settings, is unclear.
Related Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,423 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,926 citations
Deep Learning with Differential Privacy
2016 · 5,659 citations
Federated Machine Learning
2019 · 5,635 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,602 citations