This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
A Review on Attacks against Artificial Intelligence (AI) and Their Defence: Image Recognition and Generation
Keywords: Machine Learning, Artificial Intelligence
Citations: 13
Authors: 3
Year: 2024
Abstract
The main objective of this paper is to review adversarial attacks, data poisoning, model inversion attacks, and other methods that can jeopardize the integrity and dependability of AI-based image recognition and generation models. As artificial intelligence (AI) systems become more widespread across numerous sectors, their vulnerability to attacks has emerged as a major concern. In this review, we focus on attacks that specifically target AI models used for image recognition and generation tasks. We examine a wide range of attack strategies, spanning both traditional and more sophisticated techniques. These attacks exploit flaws in machine learning algorithms, frequently resulting in misclassification, falsified image generation, or unauthorized access to sensitive data. We survey numerous defense strategies developed by scholars and practitioners to address these challenges, among them adversarial training, robust feature extraction, input sanitization, and model distillation. We discuss the effectiveness and limitations of each defense mechanism, highlighting the importance of a comprehensive approach that integrates multiple techniques to improve the resilience of AI models. Furthermore, we examine the potential impact of these attacks on real-world applications such as driverless vehicles, medical imaging systems, and security monitoring, emphasizing the threats to public safety and privacy. The study also covers the legislative and ethical aspects of AI security, as well as the responsibility of AI developers to establish adequate defense measures. This analysis highlights the critical need for continued research and collaboration to develop more secure AI systems that can withstand sophisticated attacks. As AI evolves and becomes integrated into critical areas, a concerted effort must be made to strengthen these systems' resilience against hostile threats and ensure their responsible deployment for the benefit of society.
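To make the notion of an adversarial attack concrete, the following is a minimal, illustrative sketch of a gradient-sign (FGSM-style) perturbation against a tiny linear classifier. The model, weights, and inputs here are hypothetical toy values, not taken from the reviewed paper; real attacks target deep image models, but the mechanism is the same: nudge the input in the direction that increases the model's loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y, eps):
    """Fast Gradient Sign Method on a logistic model.

    Moves x by eps in the sign of the loss gradient w.r.t. the input,
    which for binary cross-entropy is (p - y) * w.
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model that classifies the clean input correctly.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.2])   # true label y = 1; model agrees (p > 0.5)
y = 1

x_adv = fgsm_perturb(w, b, x, y, eps=0.9)
print(predict(w, b, x) > 0.5)      # clean input: classified as class 1
print(predict(w, b, x_adv) > 0.5)  # perturbed input: flipped to class 0
```

Adversarial training, one of the defenses surveyed above, amounts to generating such perturbed inputs during training and fitting the model on them alongside the clean data, so the decision boundary becomes less sensitive to small worst-case shifts.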
Similar Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,356 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,448 citations
CBAM: Convolutional Block Attention Module
2018 · 21,339 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,314 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,503 citations