
Human Risk Management
AI-driven social engineering attacks: 2025 trends
AI social engineering attacks have become more realistic and complex. They blend cognitive, social and technical methods that align with human behaviour. Attackers analyse how people make decisions and exploit these cues to find vulnerabilities. SoSafe’s State of Social Engineering Survey 2025 cites the example of attackers targeting private cell phones via fake WhatsApp accounts to elicit company information. This article explores the developments in 2025, how attacks are coordinated and how to protect your business in future.
TL;DR
AI-driven social engineering attacks now imitate how your organisation communicates and operates, across email, documents, messaging apps, and identity systems, which is why verification has become the new baseline.
What “AI-driven” means now: realistic content plus believable context
Attackers are using AI social engineering that blends three fast-developing capabilities:
- Realistic content generation: emulates human communication across numerous channels, including text, images, voice and video.
- Advanced personal targeting: personalisation that tailors pretexts to specific employee roles and activities.
- Automated attack infrastructure: rapid outreach scalability that can adapt to responses and circumstances in real time.
The result is social engineering that feels intuitively trustworthy. Our survey shows that 87% of security leaders observed an increase in AI-based social engineering attacks in the past 24 months, and 83% of these leaders experienced at least one such attack in 2025.
Main attack formats:
| Type | 2025 | 2024 | Increase (percentage points) |
| --- | --- | --- | --- |
| AI-generated phishing emails | 79% | 35% | 44 pp |
| AI-generated business documents | 57% | – | 57 pp |
| Voice cloning attempts | 30% | 16% | 14 pp |
| Deepfake video incidents | 23% | 7% | 16 pp |
The methods can be very believable. One spoofed email mimicked a company executive so accurately that even the executive initially believed they had written it.
Definition: AI-driven social engineering attacks use AI to create or manipulate realistic text, documents, voice, video, or cloned sites to make impersonation and phishing harder to detect.
Why attackers win: they imitate workflows, then follow up across channels
The current threat landscape pairs content and context with process imitation and persistence. Even though 56% of surveyed leaders reported an increase in generic phishing, AI attacks go much further. They imitate workflows, impersonate roles and coordinate multi-channel activities to breach defences.
Workflow imitation
AI social engineering uses coordinated campaigns aimed at workflows. Last year, 33% of business leaders reported an increase in attempts to imitate business processes, including workflow approvals and payroll changes. Additionally, 51% reported role-targeted messages aimed at financial and HR job functions.
Multi-channel attacks
Multi-channel attacks combine a variety of methods, including email, phone calls, SMS and voice cloning. They switch between these channels to create believable scenarios. In the last 24 months, 28% of business leaders reported an increase in multi-channel intrusion attempts.
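As a purely illustrative sketch (not a SoSafe feature or anything described in the survey), a defender can approximate multi-channel correlation by grouping contact events per target and flagging anyone reached over several distinct channels within a short window. All field names and thresholds below are assumptions chosen for the example:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_multichannel_targets(events, window=timedelta(hours=24), min_channels=3):
    """Return the set of targets contacted via at least `min_channels`
    distinct channels inside any sliding `window`.

    `events` is an iterable of (target, channel, timestamp) tuples;
    this shape is a hypothetical convention for the sketch.
    """
    by_target = defaultdict(list)
    for target, channel, ts in events:
        by_target[target].append((ts, channel))

    flagged = set()
    for target, contacts in by_target.items():
        contacts.sort()  # order by timestamp
        for i, (start, _) in enumerate(contacts):
            # Distinct channels used within `window` of this starting contact
            channels = {ch for ts, ch in contacts[i:] if ts - start <= window}
            if len(channels) >= min_channels:
                flagged.add(target)
                break
    return flagged

# Usage: Alice is contacted by email, SMS and voice within one day; Bob once.
events = [
    ("alice", "email", datetime(2025, 3, 1, 9, 0)),
    ("alice", "sms",   datetime(2025, 3, 1, 11, 0)),
    ("alice", "voice", datetime(2025, 3, 1, 15, 0)),
    ("bob",   "email", datetime(2025, 3, 1, 9, 0)),
]
flagged = flag_multichannel_targets(events)
```

In practice such correlation would sit in a SIEM or similar log pipeline; the point is simply that channel-hopping becomes visible once contact events are joined per target rather than inspected per channel.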
Persistence
Attackers are now more persistent. In 2025, 46% of targeted individuals received follow-up emails, and a further 30% reported continual, deliberately sequenced multi-channel tactics.
Real-world scenarios
In 2025, 33% of organisations saw an increase in attempts at influencing internal behaviour and 47% reported more single-message attempts. Here’s a summary table of these attacks.
| What the message mimics | What the attacker wants | Real-world example |
| --- | --- | --- |
| Payroll or HR issue | Quick action, credential entry | Payroll problem email framed as an HR issue. |
| Vendor or invoicing workflow | Payment or bank detail changes | A fake vendor or invoice request. |
| IT support process | Remote access or multi-factor authentication (MFA) changes | Email requesting an employee call a named IT provider, followed by an attempt to remotely control the user’s computer. |
| Supplier document sharing | A click on the phishing link and credential capture | A compromised PDF was inserted into the company’s SharePoint system. |
| Supply chain workflow | Credentials and further phishing from compromised accounts | Fake emails imitating existing suppliers. |
| Internal event or campaign | Credential theft | Employees spoofed by email to provide their credentials for a fake internal campaign. |
Where attacks show up: personal phones, social platforms, and trusted services
Cyber attacks now specifically target employees through personal channels where it is easiest to create pressure, such as private phones, social media and instant messaging apps.
The European Union Agency for Cybersecurity (ENISA) Threat Landscape report likewise identifies a far broader set of attack surfaces as a result of these multi-channel social engineering campaigns, which significantly widens the scope of corporate cybersecurity.
The numbers from SoSafe’s 2025 survey support this:
- 71% of business leaders found fake executive profiles of themselves.
- 67% of employees had incidents involving private social media accounts.
- 53% of employees reported attack attempts on personal phones.
- 49% reported attacks aimed at personal email accounts.
Real-world examples exist across numerous channels:
- Corporate email spoofing.
- Fake WhatsApp accounts impersonating managers.
- CEO fraud, with deepfakes used to extract sensitive information.
- LinkedIn outreach, offering paid interviews designed to extract business data.
- Hacking Dropbox and other cloud sharing services to harvest information.
Identity is the prize: tokens, MFA, and cloned login portals
AI-driven attacks target access credentials, session tokens and MFA configurations, allowing attackers to pose as genuine users. In 2025, the most common patterns observed were those that bypass controls, infiltrate business channels and move money. The statistics are eye-opening:
- 71% observed business email compromise (BEC) invoice fraud.
- 66% observed smishing.
- 58% observed collaboration app phishing.
- 38% observed payroll-related attacks.
- 38% observed vishing.
- 31% observed MFA or OTP-related attacks.
- 31% observed MSP-related attacks.
Other examples include copied authentication tokens, ID token theft via website-embedded proxies and the registration of false MFA methods.
One credential-harvesting attack cloned a Citrix DaaS login page, then sent phishing emails linking to the fake portal to capture user details.
Practical takeaway: If a request involves logins, MFA changes, remote access, or payments, treat it as identity-risk until verified out of band.
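To make the takeaway concrete, here is a minimal, hypothetical triage rule; the keyword list is an illustrative assumption, not a product feature or a complete policy, and real deployments would combine it with sender and context signals:

```python
# Illustrative, non-exhaustive markers of identity-risk requests.
IDENTITY_RISK_KEYWORDS = {
    "login", "password", "mfa", "otp", "remote access",
    "payment", "invoice", "bank detail", "payroll",
}

def requires_out_of_band_verification(request_text: str) -> bool:
    """Flag requests touching logins, MFA, remote access, or payments
    so they are verified through a separate, trusted channel
    (e.g. a phone number from the company directory, never one
    supplied in the message itself)."""
    text = request_text.lower()
    return any(keyword in text for keyword in IDENTITY_RISK_KEYWORDS)
```

The value of encoding the rule is less the detection rate than the default it sets: anything matching gets escalated to out-of-band verification before anyone acts.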
How detection really happens: people plus tools, under tool constraints
Defending against AI social engineering attacks requires a dual detection approach, combining human reporting with automated blocks such as intrusion prevention systems, endpoint detection and response, and network detection and response. Tool balance is important: too few tools create security blind spots, while too many create tool sprawl that overwhelms teams with signals.
Companies were confident in their detection abilities in 2025: our survey shows that 39% rated their ability to detect AI-driven attacks as high or very high. Overall, automated cybersecurity tools intercepted 65% of threats, with a further 32% attributed to employees reporting suspicious activity.
In our survey, 74% of respondents rated their toolset as balanced, providing sufficient protection with a manageable number of signals. Conversely, 14% said they experience overload due to too many tools, while 12% believed that they have too few tools.
A practical example of the people-plus-tools approach is the WhatsApp impersonation attempts that were quickly reported, illustrating how fast reporting can mitigate harm. The CEO spoofing example, by contrast, challenges human judgement, because the deepfake and its context are so realistic.
The takeaway: technology is doing most of the blocking, but people still decide whether unusual, believable requests get escalated.
What the next maturity milestone should be
The next stage is adaptive human defence: role-specific, dynamic training informed by real-world signals, delivered in short cycles, and designed to keep pace with changing attack patterns.
The emphasis should be on intent verification, identity visibility, and multi-channel readiness. The 87% reported increase in AI social engineering attempts in 2025 underlines the importance of this.
Here’s how respondents rated the contribution of each element to overall threat protection (out of 5):
- Faster detection of threat patterns: 4.2 / 5
- Adaptive, personalised learning: 3.8 / 5
- Better integration of human risk data into the security stack: 3.7 / 5
- Realistic, real-time simulations: 3.6 / 5
- Benchmarking: 3.1 / 5
Download The State of Social Engineering
Learn where even mature organisations still struggle, based on 2025 data, and what the next maturity milestone should be.