This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Utilization of Large Language Models for conformity assessment: Chances, Threats, and Mitigations
Citations: 0
Authors: 4
Year: 2025
Abstract
Assessing the conformity of software in measurement instruments is a laborious process and a major bottleneck in the process of developing new devices. Large Language Models have been shown to effectively handle complex tasks and have the ability to surpass humans with regard to speed and accuracy. However, integrating them into the technology stack can bring major security and privacy risks. This position paper performs a threat modeling in this context. By addressing the discovered confidentiality risks the paper draws a way for safely implementing Large Language Models as an essential tool in the process of conformity assessment.
Related Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,395 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,872 citations
Deep Learning with Differential Privacy
2016 · 5,595 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,591 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,564 citations