This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Experts fail to reliably detect AI-generated histological data
Citations: 2
Authors: 6
Year: 2024
Abstract
AI-based methods for generating images have seen unprecedented advances in recent years, challenging both image forensics and human perceptual capabilities. Accordingly, they are expected to play an increasingly important role in the fraudulent fabrication of data. This includes images with complicated intrinsic structures, such as histological tissue samples, which are harder to forge manually. We use stable diffusion, one of the most recent generative algorithms, to create such a set of artificial histological samples, and in a large study with over 800 participants we assess the ability of human subjects to discriminate between artificial and genuine histological images. Although experts perform better than naive participants, we find that even they fail to reliably identify fabricated data. While participant performance depends on the amount of training data used, even low quantities yield convincing images, necessitating methods to detect fabricated data and technical standards such as C2PA to secure data integrity.
Related works
A survey on deep learning in medical image analysis
2017 · 13,526 citations
Dermatologist-level classification of skin cancer with deep neural networks
2017 · 13,148 citations
A survey on Image Data Augmentation for Deep Learning
2019 · 11,758 citations
QuPath: Open source software for digital pathology image analysis
2017 · 8,122 citations
Radiomics: Images Are More than Pictures, They Are Data
2015 · 7,991 citations