
Human Risk Management
Closing the 19-day security training gap in the tech industry
At 9:14 am, a platform engineer gets a Slack message about an urgent SSO issue. The sender looks internal. The request feels plausible. The login flow looks familiar enough to trust. By the time someone flags the message, the attacker may already have a token, a session, or a foothold in a workflow that moves quickly by design.
According to the Adaptive Defense Playbook 2026, 71% of EU security professionals reported an increase in the scope of AI-driven attacks, with the majority noting that attacks now span multiple channels and multiple steps, including deepfakes, emails, and SMS messages.
The same study found that, after a new threat appears, organisations take an average of 19 days to update overall defences, refresh the guidance and training employees rely on, and measure whether the update actually changed behaviour. Technology and software companies made up 26% of respondents.
For the tech sector, that lag is long enough for one believable lure to move across identity, support, source control, and SaaS admin workflows before the organisation has fully caught up.
TL;DR
This article explains why a 19-day gap in updating social engineering defences is especially risky for software companies, where one new tactic can spread across identity, support, and engineering workflows before teams adapt. It also shows how security leaders can close that gap with faster reporting, role-specific reinforcement, and a more adaptive human risk management approach.
Why software companies experience this gap
Software companies give attackers something unusually useful: trusted workflows that can move access, code, data, or secrets very quickly. That includes SSO prompts, SaaS admin approvals, help desk resets, package publishing, CI workflows, and internal chat tools.
Microsoft’s Digital Defense Report 2025 says identity-based attacks rose by 32% in the first half of 2025 as adversaries increasingly use AI, while Verizon’s Data Breach Investigations Report found that third-party involvement in breaches doubled to 30%. In software environments, one approved login or one leaked secret can travel fast.
The tactics already fit that pattern. Okta Threat Intelligence has tracked attackers abusing Slack notifications to redirect targets to phishing proxies, while a separate Okta advisory describes help desk impersonation used to trigger password resets and enrol new MFA factors. On the engineering side, GitHub’s advisory on the tj-actions/changed-files compromise says the supply-chain attack affected more than 23,000 repositories and exposed CI/CD secrets in workflow logs. In software companies, these paths sit close to identity, admin access, and delivery pipelines.
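One widely recommended hardening step after incidents like the tj-actions compromise is to pin third-party actions to an immutable commit SHA rather than a mutable tag, so that an attacker who repoints the tag cannot silently swap in malicious code. A minimal sketch of what that looks like in a GitHub Actions workflow; the SHA below is an illustrative placeholder, not a real release pin:

```yaml
steps:
  # Risky: a mutable tag can be repointed by whoever controls the action's repo
  # - uses: tj-actions/changed-files@v44

  # Safer: pin to a full commit SHA so the resolved code cannot change silently
  # (placeholder SHA for illustration only)
  - uses: tj-actions/changed-files@0123456789abcdef0123456789abcdef01234567 # v44
```

Pinning shifts the risk from "the tag moved" to "the SHA was reviewed once"; dependency-update tooling such as Dependabot can still raise pull requests against SHA-pinned actions.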
Where software security teams lose 19 days
Ownership is split across too many joins
A new lure lands. One team sees it first. Another owns the control change. A third owns the internal message. A fourth owns the training update. In software companies, that handoff often runs across IAM, IT, platform engineering, developer experience, security operations, and awareness owners. The delay is rarely caused by one slow team. It builds in the joins. That is why one clear reporting and triage path is worth more than another disconnected awareness asset.
One incident rarely produces one lesson
The lesson for a help desk analyst is different from the lesson for a platform engineer, a workspace admin, or an open source maintainer. One group may need better caller verification. Another may need tighter judgement on OAuth scopes, CI secrets, or trusted contributors. That slows the update cycle, especially when teams still rely on broad annual awareness content.

A 2023 study found that context shapes how people respond and whether they report, while a study on developer security warnings found that no single warning type works best for developers and that warning context matters. In software environments, one-size-fits-all reinforcement is a weak fit. Teams need a way to turn one live tactic into role-specific guidance while it still looks familiar.
Most teams can send a warning faster than they can prove change
This is where many programmes stall. A warning goes out. A reminder follows. Then the trail goes cold. Security leaders still need to know whether reporting improved, risky approvals dropped, or the same lure stopped working on the same teams.

Research helps explain why that loop slows down. A 2024 study on phishing reporting found that poor feedback, or the lack of a clear outcome, was the main reason people were discouraged from reporting phishing to companies. That makes the 19-day gap a measurement problem as much as an update problem. Connected behavioural data matters here more than completion data ever will.
What to review first as AI-driven social engineering grows
The EU Cyber Resilience Act (CRA) entered into force on 10 December 2024 and requires products with digital elements to meet cybersecurity requirements across design, development, and maintenance, with reporting obligations applying from 11 September 2026.
That makes slow adaptation harder to defend: when a new social engineering tactic exposes a weak approval path, development habit, or support workflow, teams need to update both practice and proof more quickly.
Start with the workflows that can grant access or ship change within minutes. For most tech companies, that means help desk resets, IdP approvals, SaaS admin actions, CI/CD workflows, package publishing, and any path that exposes reusable secrets.
Then look at who needs separate reinforcement. Support teams, workspace admins, platform engineers, developer tooling owners, and maintainers do not need the same examples or the same guidance.
Finally, decide what proof would tell you the update worked within two weeks. Reporting speed. Repeat failures in high-agency roles. Risky approvals. Reset requests that should have been challenged. If that evidence is still scattered across tools and spreadsheets, the next delay has already started.
See how security leaders are benchmarking adaptation speed and behavioural readiness against AI-driven manipulation.
Read the full report
How software companies close the gap faster
The strongest response looks like a loop. Catch the signal early. Mirror the live tactic quickly in safe practice. Reinforce the roles where access and authority are concentrated. Then measure whether behaviour actually moved.

Software security teams need a way to centralise reporting and triage, turn real lures into safe simulations quickly, deliver short role-based reinforcement, and measure whether people are getting harder to manipulate over time. That is what SoSafe’s adaptive human risk management approach is built for: less manual drag, tighter alignment to live threats, and clearer proof that the organisation is learning faster.
Close the software security gap
See how to turn reporting, reinforcement, and behavioural insight into a faster response loop for identity, support, and engineering workflows.
