OpenAlex · Updated hourly · Last updated: 16.03.2026, 07:49

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

AI-Powered Social Engineering: Emerging Attack Vectors, Vulnerabilities, and Multi-Layered Defense Strategies

2026 · 0 citations · Computers · Open Access

Citations: 0

Authors: 4

Year: 2026

Abstract

Over the past decade, a growing number of cyberattacks have leveraged AI, enabling unprecedented levels of personalization, automation, and deception. For instance, recent industry surveys have reported sharp increases in unique social engineering attacks within a single month of 2023, coinciding with the public release of ChatGPT-3.5. This trend highlights how Artificial Intelligence (AI)-powered phishing campaigns have become a significant threat to digital ecosystems. The present study provides an integrative analysis of how generative and deepfake technologies have reshaped the landscape of Social Engineering (SE) attacks, categorizing the main attack strategies and examining their psychological, technological, and ethical implications. In addition to reviewing enabling technologies, our study conducts a comparative analysis of frameworks and analytical models of AI-driven SE operations and their defensive countermeasures from technical, empirical, and quantitative perspectives. The convergence of these frameworks reveals three core capabilities (realism, personalization, and automation) that systematically amplify attack efficiency. Building on these insights, the study proposes the Unified Model for AI-Driven Social Engineering (UM-AISE), a conceptual framework that integrates these dimensions across the attack lifecycle and employs a theoretical Markov Decision Process (MDP) analysis. This formalization demonstrates how these capabilities can shift the attacker's optimal strategy, offering a formal economic perspective distinct from empirical validation. Finally, the study discusses emerging ethical and regulatory challenges associated with AI-mediated deception, highlighting risks related to opacity, accountability, and large-scale manipulation. Taken together, these elements inform evolving approaches for detection, defense, and governance relevant to researchers, policymakers, and practitioners.
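To make the abstract's economic argument concrete, the following is a minimal sketch (not taken from the paper, whose UM-AISE formulation is not reproduced here) of how an MDP-style expected-value calculation can flip an attacker's optimal action once AI raises the per-attempt success probability. All payoffs, costs, and probabilities below are invented for illustration.

```python
# Hypothetical one-step attacker decision: "attack" vs. "abstain".
# AI-driven realism/personalization is modeled only as a higher
# success probability p_success; every numeric parameter is invented.

def best_action(p_success, payoff=100.0, cost=1.0, penalty=5.0):
    """Return the action maximizing expected value for one attack attempt.

    ev(attack)  = p * payoff - (1 - p) * penalty - cost
    ev(abstain) = 0
    """
    ev_attack = p_success * payoff - (1 - p_success) * penalty - cost
    if ev_attack > 0:
        return ("attack", ev_attack)
    return ("abstain", 0.0)

# With a low success rate (pre-AI), attacking has negative expected value;
# a modest AI-driven lift in p_success makes attacking the optimal policy.
print(best_action(0.01))  # abstain
print(best_action(0.10))  # attack
```

The point of the sketch is only that the optimal policy depends discontinuously on the success probability, which is the kind of shift the abstract attributes to realism, personalization, and automation.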
