Human Risk Management

AI-driven social engineering attacks: 2025 trends

10 February 2026 · 6 min read

AI social engineering attacks have become more realistic and complex. They blend cognitive, social and technical methods that align with human behaviour. Attackers analyse how people make decisions and exploit these cues to find vulnerabilities. SoSafe’s State of Social Engineering Survey 2025 cites, for instance, attackers targeting private cell phones with fake WhatsApp accounts to elicit company information. This article explores the developments in 2025, how attacks are coordinated and how to protect your business in future.

TL;DR

AI-driven social engineering attacks now imitate how your organisation communicates and operates, across email, documents, messaging apps, and identity systems, which is why verification has become the new baseline.

What “AI-driven” means now: realistic content plus believable context

Attackers are using AI social engineering that blends three fast-developing capabilities:

  • Realistic content generation
    This content emulates human communication across numerous channels, including text, images, voice and video.
  • Advanced personal targeting
    Personalisation that tailors pretexts to specific employee roles and activities.
  • Automated attack infrastructure
    Rapid outreach scalability that can adapt to responses and circumstances in real time.

This social engineering feels intuitively trustworthy. Our survey shows that 87% of security leaders observed an increase in AI-based social engineering attacks in the past 24 months. Furthermore, 83% of these leaders experienced at least one such attack in 2025.

Main attack formats:

| Type | 2025 | 2024 | % Increase |
| --- | --- | --- | --- |
| AI-generated phishing emails | 79% | 35% | 44% |
| AI-generated business documents | 57% | – | 57% |
| Voice cloning attempts | 30% | 16% | 14% |
| Deepfake video incidents | 23% | 7% | 16% |

The methods can be very believable. One email spoof so accurately mimicked a company executive that even they initially believed they had written it.

Definition: AI-driven social engineering attacks use AI to create or manipulate realistic text, documents, voice, video, or cloned sites to make impersonation and phishing harder to detect.

Why attackers win: they imitate workflows, then follow up across channels

The current threat landscape pairs content and context with process imitation and persistence. Even though 56% of surveyed leaders reported an increase in generic phishing, AI attacks go much further. They imitate workflows, impersonate roles and coordinate multi-channel activities to breach defences.

Workflow imitation

AI social engineering uses coordinated campaigns aimed at workflows. Last year, 33% of business leaders reported an increase in attempts to imitate business processes, including workflow approvals and payroll changes. Additionally, 51% reported role-targeted messages aimed at financial and HR job functions.

Multi-channel attacks

Multi-channel attacks combine a variety of methods, including email, phone calls, SMS and voice cloning. They switch between these channels to create believable scenarios. In the last 24 months, 28% of business leaders reported an increase in multi-channel intrusion attempts.
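As an illustration only, a security team could correlate contact attempts across channels to surface this pattern. The sketch below uses a hypothetical event format and illustrative thresholds (neither is from the survey): it flags any target approached on two or more distinct channels within a 24-hour window.

```python
from datetime import datetime, timedelta

# Illustrative thresholds, not drawn from the survey.
WINDOW = timedelta(hours=24)
MIN_CHANNELS = 2

def flag_multichannel(events):
    """Return targets contacted on MIN_CHANNELS+ distinct channels within WINDOW.

    Each event is a hypothetical (target, channel, timestamp) tuple.
    """
    flagged = set()
    by_target = {}
    for target, channel, ts in sorted(events, key=lambda e: e[2]):
        history = by_target.setdefault(target, [])
        history.append((channel, ts))
        # Keep only events inside the sliding window ending at this event.
        history[:] = [(c, t) for c, t in history if ts - t <= WINDOW]
        if len({c for c, _ in history}) >= MIN_CHANNELS:
            flagged.add(target)
    return flagged
```

For example, a target who receives an email at 09:00 and an SMS at 11:00 the same day would be flagged, while a target contacted on only one channel would not.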

Persistence

Attackers are now more persistent. In 2025, 46% of targeted individuals received follow-up emails. A further 30% reported continual, deliberately sequenced multi-channel tactics.

Real-world scenarios

In 2025, 33% of organisations saw an increase in attempts at influencing internal behaviour and 47% reported more single-message attempts. Here’s a summary table of these attacks.

| What the message mimics | What the attacker wants | Real-world example |
| --- | --- | --- |
| Payroll or HR issue | Quick action, credential entry | Payroll problem email framed as an HR issue. |
| Vendor or invoicing workflow | Payment or bank detail changes | A fake vendor or invoice request. |
| IT support process | Remote access or multi-factor authentication (MFA) changes | Email requesting an employee call a named IT provider, followed by an attempt to remotely control the user’s computer. |
| Supplier document sharing | A click on the phishing link and credential capture | A compromised PDF inserted into the company’s SharePoint system. |
| Supply chain workflow | Credentials and further phishing from compromised accounts | Fake emails imitating existing suppliers. |
| Internal event or campaign | Credential theft | Employees spoofed by email to provide their credentials for a fake internal campaign. |

Where attacks show up: personal phones, social platforms, and trusted services

Cyber attacks now target employees through personal channels where pressure is easiest to apply, such as private phones, social media and instant messaging apps.

Accordingly, the European Union Agency for Cybersecurity (ENISA) Threat Landscape report identifies a markedly changed threat landscape as a result of these multi-channel social engineering campaigns, significantly widening the scope that cybersecurity teams must cover.

The numbers from SoSafe’s 2025 survey support this:

  • 71% of business leaders found fake executive profiles of themselves.
  • 67% of employees had incidents involving private social media accounts.
  • 53% of employees reported attack attempts on personal phones.
  • 49% reported attacks aimed at personal email accounts.

Real-world examples exist across numerous channels:

  • Corporate email spoofing.
  • Fake WhatsApp accounts impersonating managers.
  • CEO fraud, with deepfakes used to extract sensitive information. 
  • LinkedIn outreach, offering paid interviews designed to extract business data.
  • Hacking Dropbox and other cloud sharing services to harvest information.

Identity is the prize: tokens, MFA, and cloned login portals

AI attacks focus on identity assets such as access credentials, session tokens and MFA settings. Attackers can then pose as genuine users. In 2025, the most common patterns observed were those that bypass controls, infiltrate business channels, and move money. The statistics are eye-opening:

  • 71% observed business email compromise (BEC) invoice conversion.
  • 66% observed smishing.
  • 58% observed collaboration app phishing.
  • 38% observed payroll-related attacks.
  • 38% observed vishing.
  • 31% observed MFA or OTP-related attacks.
  • 31% observed MSP-related attacks.

Other examples include copied authentication tokens, ID token theft via a website-embedded proxy, and registration of fraudulent MFA methods.

Another example involved a credential-harvesting attack: attackers cloned a Citrix DaaS login page and sent phishing emails linking to the fake page to capture user details.
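One common defence against cloned login portals is strict allowlisting of login hostnames. The sketch below is a minimal illustration with hypothetical domains (not from any real deployment): it accepts only exact matches, so lookalike hosts built from a trusted name fail the check.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real deployment would source this from policy.
TRUSTED_LOGIN_HOSTS = {"login.example-corp.com", "citrix.example-corp.com"}

def is_trusted_login_link(url: str) -> bool:
    """Reject any login link whose exact hostname is not on the allowlist.

    Exact matching is deliberate: lookalikes such as
    'citrix.example-corp.com.evil.test' or 'citrix-example-corp.com'
    contain the trusted name but must not pass.
    """
    host = (urlparse(url).hostname or "").lower()
    return host in TRUSTED_LOGIN_HOSTS
```

Substring or "contains" checks would be fooled by exactly the kind of cloned-portal domains described above, which is why the comparison is against the full hostname.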

Practical takeaway: If a request involves logins, MFA changes, remote access, or payments, treat it as identity-risk until verified out of band.
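The takeaway above can be sketched as a simple triage rule. The keyword list below is a hypothetical starting point, not an exhaustive policy: any request mentioning logins, MFA, remote access, or payments gets routed to out-of-band verification.

```python
# Hypothetical trigger terms; a real policy would be broader and tuned.
IDENTITY_RISK_TERMS = (
    "password", "login", "credential",
    "mfa", "one-time code", "otp",
    "remote access",
    "payment", "invoice", "bank detail", "payroll",
)

def is_identity_risk(request_text: str) -> bool:
    """True if the request should be verified out of band before acting."""
    text = request_text.lower()
    return any(term in text for term in IDENTITY_RISK_TERMS)
```

A keyword rule like this is deliberately crude; its job is not to catch attackers but to make sure risky request types always trigger the verification step.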

How detection really happens: people plus tools, under tool constraints

In the face of AI social engineering attacks, a dual detection approach is required, combining human reporting with automated blocks such as intrusion prevention systems, endpoint detection and response, and network detection and response. Tool balance is important: too few tools leave security blind spots, while too many create tool sprawl that overloads your team with signals.

Companies were confident in their detection abilities in 2025. Our survey shows that 39% rated their ability to detect AI-driven attacks as high or very high. Overall, automated cybersecurity tools intercepted 65% of threats, with 32% attributed to employees reporting suspicious activity.

In our survey, 74% of respondents rated their toolset as balanced, providing sufficient protection with a manageable number of signals. Conversely, 14% said they experience overload due to too many tools, while 12% believed that they have too few tools.

A practical example of the people-plus-tools approach is a WhatsApp impersonation attempt that was quickly reported, illustrating how fast reporting can mitigate harm. On the other hand, the CEO spoofing example challenges human judgement, as the deepfake and its context are so realistic.

The takeaway: technology is doing most of the blocking, but people still decide whether unusual, believable requests get escalated.

What the next maturity milestone should be

The next stage is adaptive human defence: role-specific, dynamic training informed by real-world signals, delivered in short cycles, and designed to keep pace with changing attack patterns.

The emphasis should be on intent verification, identity visibility, and multi-channel readiness. The 87% reported increase in AI social engineering attempts in 2025 underlines the importance of this. 

Here’s how security leaders rated the contribution of various capabilities to overall threat protection (average score out of 5):

  • Faster detection of threat patterns: 4.2 / 5
  • Adaptive, personalised learning: 3.8 / 5
  • Better integration of human risk data into the security stack: 3.7 / 5
  • Realistic, real-time simulations: 3.6 / 5
  • Benchmarking: 3.1 / 5

Download The State of Social Engineering

Learn where even mature organisations still struggle, based on 2025 data, and what the next maturity milestone should be.

Download the report

FAQs

What are the most common AI-driven social engineering attack patterns?

The most frequently observed patterns in The State of Social Engineering include BEC invoice conversion, smishing, and collaboration app phishing, alongside payroll and voice-based tactics like vishing. These patterns aim to bypass controls and access personal employee channels, company workflows, supply chains and financial assets. A typical example involves targeting employees’ private phones with fake WhatsApp accounts containing information taken from LinkedIn profiles and spoofing a legitimate conversation to gain trust and information.

Why do attackers target employees’ private channels and devices?

Attackers use private channels to bypass corporate controls and reach employees where they are harder to protect. If a staff member is impersonated or pressured on a private device, it can lead to corporate access, false payments or data disclosure.

Where does AI add the most value to defence?

AI is most valued for faster detection of emerging patterns, followed by adaptive learning and better integration of human risk signals into the security stack. Operational outcomes include faster threat detection and response times, identifying new types of threats, reducing human workload and providing more accurate alerts, thus reducing overload. Enhanced user and entity behaviour analysis detects malicious activity and compromised user accounts by identifying deviations from the norm.

