
Human Risk Management
The 19-day training gap in financial services cybersecurity
At 5:12 pm, a treasury analyst receives what looks like a normal payment request. The project name matches. The amount does not seem unusual. A senior executive appears to confirm that it needs urgent action. Nothing in the email chain remotely suggests fraud. By the time someone stops to question whether the request is genuine, the payment may already be in motion.
According to SoSafe’s recent study, 67% of surveyed security professionals reported an increase in the number of AI-engineered attacks over the last 12 months and 71% reported an increased scope of AI attacks.
The same study found that 19 days is the average time organisations take, after a new threat appears, to update overall defences, refresh the guidance and training employees rely on, and measure whether the update actually changed behaviour.
The study draws on responses from security leaders across nine European countries and a wide mix of sectors, with finance and banking making up the largest share at 48%. In financial services, a 19-day delay is dangerous because attackers can reuse successful tactics while those defences are still catching up.
TL;DR
This article explains why the 19-day gap in updating cybersecurity awareness training can make or break financial services, how that delay slows an organisation's ability to adapt to new fraud tactics, and how finance teams can close it through adaptive human risk management.
Why financial services is a prime target for AI-driven attacks
The financial services industry is attractive to attackers because routine work can lead to high-value results. Everyday tasks such as approving payments, changing access, updating vendor details, handling client requests, or responding to urgent executive messages can all move money, data, or permissions. On the surface, these actions look normal. That is exactly why attackers try to hide inside them.
According to SoSafe’s State of AI and Social Engineering Report, 57% of leaders had encountered AI-generated fake business documents, such as invoices, policies, and contracts, in 2025.

The European financial sector faced 488 publicly reported incidents between January 2023 and June 2024.
Social engineering campaigns, including phishing, smishing and vishing, were prevalent tactics used by cybercrime threat actors.
Source: ENISA’s finance-sector threat landscape
Financial services also runs on speed and trust: you cannot add friction to every approval, callback, or client request without slowing down service.
That pressure increases with deepfake voice and video. A convincing voice note or video clip can make a request feel trustworthy just as someone is deciding whether to approve a payment, share data, or move a task forward.
The problem is not employee carelessness. It is that the request can look and sound credible in a workflow where quick action is expected. And when defences take 19 days to catch up, that window stays open longer than it should.
Where the 19-day delay builds in the finance industry
Payment and treasury approvals
The EBA Consumer Trends Report 2024/25 points to increasingly complex social engineering fraud used to persuade people to authorise payments. In payment and treasury teams, the delay often starts after the first suspicious request is spotted. One team may notice something unusual, but before that observation becomes useful across the organisation, it usually has to be checked, escalated, and turned into guidance that other employees can act on.
Vendor and invoice changes
Vendor fraud creates delay for a different reason. The signal is often spread across accounts payable, procurement, supplier management, and business owners, so the organisation is slower to recognise that a routine request is part of a wider pattern. That is a problem in a sector where the EBA’s operational risks and resilience report says fraud risk has grown significantly in the last two years.
Executive and client-facing escalations
The longest delay often appears in requests tied to authority. A spoofed executive message or a pressured client escalation is harder to turn into useful training reinforcement because the lesson is rarely generic. Teams need to show employees what changed in the wording, the approval path, or the exception process. That takes time. Attackers do not wait.
Under the Digital Operational Resilience Act (DORA), financial entities are expected to manage digital operational resilience in a more structured way, which raises the bar for how changes are reviewed and evidenced.
The 19-day gap is usually lost in that space between one incident and organisation-wide reinforcement. In financial services cybersecurity, the issue is rarely a lack of controls. It is the time it takes to turn a live fraud signal into something treasury, accounts payable, and client-facing teams can act on before the same tactic is used again.
DORA raises the bar, but speed still decides outcomes
The Digital Operational Resilience Act (DORA) has applied since 17 January 2025 and sets requirements around information and communication technology risk management, testing, and third-party risk for financial entities across the EU. That means the burden of proof is higher. Leaders need to show more than policy coverage. They need evidence that the organisation can adapt in practice.

That is why policy sign-off is weak evidence on its own. Acknowledgements are useful records, but they do not show whether people can recognise and respond to current fraud tactics inside live workflows. The European Central Bank's cyber resilience oversight expectations support a more operational view of resilience, where practical effectiveness matters more than a paper trail alone.
The reporting silence problem in regulated environments
The gap often starts earlier than leaders expect. It starts when employees notice something odd and say nothing. In financial services, that silence is especially costly because the first useful warning often comes from the workforce before a technical system has enough evidence to classify the attack.
If those signals are not reported, the organisation loses visibility into real exposure. That is where adaptive human risk management becomes useful as an operating model. It helps connect behaviour-linked exposure to measurable action, instead of treating each near miss as an isolated mistake.
See how security leaders are benchmarking adaptation speed and behavioural readiness against AI-driven manipulation.
Read the full report
How financial services teams can close the 19-day cybersecurity training gap
The most useful next step is to find where the delay actually begins. Does it start at detection, where new fraud patterns are noticed but not captured quickly enough? Does it begin in internal review, where approvals slow the update cycle? Or does it happen at rollout, where the right teams receive the right guidance too late to matter?
Then look at which workflows carry the highest consequence. Payment approvals, vendor changes, executive escalations, and client-facing requests often look routine until they fail. Those are the places where a more adaptive loop earns its value.
First, detect new tactics early. Then mirror them quickly in relevant simulations and guidance. Intervene where judgement carries the highest downstream risk. Finally, measure whether behaviour is improving fast enough to reduce exposure.

SoSafe’s Threat Inbox helps teams turn employee reporting into a live detection layer. Recreate Attack helps them recreate current phishing patterns quickly, so simulations stay close to what employees are actually seeing. Human Risk OS™ then helps leaders track whether reporting, judgement, and workforce risk are improving over time.
Close the FinServ resilience gap
See how to turn reporting, simulation, and behavioural insight into a faster defence loop for high-risk financial workflows.