OpenAlex · Updated hourly · Last updated: Apr 28, 2026, 06:36

Helmholtz Center for Information Security

3,963 works · 55,551 citations
Country: DE · Type: facility

Relevant works

Most-cited publications in Health & MedTech

Swarm Learning for decentralized and confidential clinical machine learning

Stefanie Warnat‐Herresthal, Hartmut Schultze, Krishnaprasad Lingadahalli Shastry et al.

2021 · 817 citations

Membership Leakage in Label-Only Exposures

Zheng Li, Yang Zhang

2021 · 185 citations

Comments on the “Draft Ethics Guidelines for Trustworthy AI” by the High-Level Expert Group on Artificial Intelligence.

Ninja Marnau

2019 · 48 citations

Using AI Assistants in Software Development: A Qualitative Study on Security Practices and Concerns

Jan H. Klemmer, Stefan Albert Horstmann, Nikhil Patnaik et al.

2024 · 17 citations

LLMs and Stack Overflow discussions: Reliability, impact, and challenges

Léuson Da Silva, Jordan Samhi, Foutse Khomh

2025 · 5 citations

Reconstruct Your Previous Conversations! Comprehensively Investigating Privacy Leakage Risks in Conversations with GPT Models

Junjie Chu, Zeyang Sha, Michael Backes et al.

2024 · 3 citations

"That's another doom I haven't thought about": A User Study on AI Labels as a Safeguard Against Image-Based Misinformation

Sandra Höltervennhoff, Jonas Ricker, Maike M. Raphael et al.

2026 · 1 citation

Inside the Black Box: Detecting Data Leakage in Pre-Trained Language Encoders

Xin Yuan, Zheng Li, Ning Yu et al.

2024 · 1 citation

The science and practice of proportionality in AI risk evaluations

Carlos Mougán, Lauritz Morlock, Jair Aguirre et al.

2026 · 0 citations

Report on the 9th Workshop on Search-Oriented Conversational Artificial Intelligence (SCAI 2025) at IJCAI 2025

Svitlana Vakulenko, Philipp Christmann, Isabel Feustel et al.

2025 · 0 citations

Inspectable AI for Science: A Research Object Approach to Generative AI Governance

Ruta Binkyte, Sharif Abuaddba, Chamikara Mahawaga et al.

2026 · 0 citations

Robustness Over Time: Understanding Adversarial Examples’ Effectiveness on Longitudinal Versions of Large Language Models

Yugeng Liu, Tianshuo Cong, Zhengyu Zhao et al.

2026 · 0 citations

A Systematic Study of Training-Free Methods for Trustworthy Large Language Models

Wai Man Si, Mingjie Li, Michael Backes et al.

2026 · 0 citations