The security professionals in the survey use a variety of methods to adjust their cybersecurity approach to keep abreast of evolving AI threats.
The most common adjustment is the use of real-time signals to constantly monitor for cybersecurity threats, cited by 40% of respondents. Put another way, the majority of organisations aren't yet using real-time signals. Another 39% use structured scanning methods, including regular review processes to identify where changes are required. External intelligence from vendors, CERTs or government advisories is used by 35% of security professionals, and peer and community sharing in industry groups and ISACs is favoured by a further 35%. Disturbingly, 35% of respondents say they catch up reactively, only making changes after alerts or incidents. This last approach provides no pre-emptive protection, leaving the organisations that rely on it extremely vulnerable.
Employee behaviour change is now a security performance problem, and most organisations have recognised that the human element is crucial in combatting social engineering attacks. While people remain a major vulnerability, they are also one of the best forms of protection when integrated into an adaptive loop. This reflects the realisation that personalised, AI-driven social engineering targeting individuals is now a primary attack method precisely because it circumvents the technical defences that software can counter.
As a result, companies are adapting employee-facing defence in a number of areas, and 95% of the surveyed security professionals believe that their organisation is adaptive in this regard.
If these self-assessments are accurate, this is a good sign, indicating that the majority of organisations are adopting adaptive defence strategies that give them the insight and capability to counter AI-generated attacks.
The vast majority of survey respondents (88%) said that they are "likely to invest in building an adaptive, behaviour-driven security culture" in the coming year and beyond. Broken down further, 43% say it is "very likely", with 45% saying that it's "somewhat likely". On the surface, this reflects broad recognition that adaptive defence is necessary in an AI-driven threat environment. However, the more revealing figure may be the remaining 12% of respondents who consider such investment unlikely, despite its clear importance and benefits.
There is also nuance within the 88%. "Likely" does not necessarily translate into committed budget, board-level decisions or operational execution; intent and implementation are not the same. While most organisations acknowledge the importance of adaptive defence, a meaningful minority either remain unconvinced or are not convinced enough to allocate resources.
The more strategic question is not whether organisations recognise the value of adaptive defence, but whether they are prepared to treat it as core infrastructure rather than a discretionary enhancement.
Having established that modern cybersecurity needs AI assistance combined with human behavioural change in an agile adaptive defence model, it's important that secure behaviours are consistently applied. Many organisations haven't yet reached the desired maturity levels for this.
Humans are fallible. No matter how much training and guidance they receive, people remain vulnerable to psychological techniques that tap into deep-seated instincts and habitual behaviours. To build an effective playbook, we must address the psychological mechanisms that hackers exploit.
The top factor that surveyed security professionals say best explains why secure behaviour is not followed consistently in their organisation today is that "guidance is too generic, so it does not fit real workflows".
Surveyed security professionals list several reasons why consistently secure human behaviour is hard to maintain. The most cited problem is generic training and guidance that doesn't match real-world situations, with 38% of respondents saying this applies to their organisation. Another 34% say that expectations are not reinforced consistently by management or teams, and the same percentage report that people do not recognise risk or do not know the right actions to take.
From this we can deduce that a large proportion of organisations face challenges in changing employee behaviour. Generic training needs to be replaced with more accurate simulations of real-world scenarios that people encounter in their daily work lives. These can significantly improve risk recognition and teach appropriately secure responses. Managers need to constantly reinforce these through gentle guidance, not punitive measures.
Another problem encountered by 34% of respondents is that people tend to take the path of least resistance, thus deviating from the security process. When they perceive that the secure path is too onerous they look for shortcuts, use unapproved tools and skip steps. This suggests that organisations need to create more seamless, user-friendly security processes to achieve higher employee buy-in and get them to follow the correct steps.
Workload and time pressure can also cause people to ignore security measures when they feel overloaded, as reported by 30% of security professionals. Once again, the answer is to integrate cybersecurity in ways that don't add to employee workloads or increase the pressure to deliver. A further 33% said that secure processes are more difficult to roll out to employees who work remotely and to contractors.
This calls for wider education and stronger collaboration. Modern AI-driven attacks threaten an organisation's entire ecosystem, which requires all stakeholders to cooperate in a cohesive cyber defence strategy.
Notably, only 34% of security professionals believe that their employees generally follow secure behaviour consistently. This highlights a failure of "transfer learning", the ability to apply training to real-world tasks.
Our survey shows that 21% of organisations still rely on "box-ticking" (policy sign-offs). Only 19% have the ability to derive human insights and rapidly adapt their security posture accordingly, while another 19% use these insights but lag behind in implementing change, leaving them treading water in the face of a rapid attack, unable to respond quickly enough. In other words, roughly four in five companies are either ignoring direct behavioural insights or responding too slowly.
We also see that 18% of companies use people and security telemetry to inform security decisions and board reporting. This can be a powerful combination, as long as there isn't too much reliance on data rather than human input. Adaptive defence needs a well-balanced blend of the two.
Somewhat strangely, the results show that 11% of companies do measure security awareness levels, but don't change anything as a result. Another 11% admit that they can't trust their data. This lack of awareness or action leaves them extremely vulnerable, even to traditional non-AI attack vectors.
The survey results show that when someone reports a mistake or a near miss, 26% of organisations provide quick, helpful feedback, emphasising continuous learning. However, many respondents say the experience is mixed, depending on who the reporter's manager is or what team they're in.
The other responses aren't nearly as encouraging. Taken together, 42% of professionals say that reporting is either limited or avoided completely, presumably for fear of repercussions. Of these, 23% say that their employees worry about the consequences so they limit their reporting. Almost a fifth (19%) of respondents admit that staff either rarely report or avoid it completely. This strongly indicates that these organisations need to cultivate a more supportive and encouraging culture for employees to report incidents and make it clear that this will be appreciated and not punished.

Ensure that when an employee reports a threat or makes a mistake, they receive immediate, helpful feedback. This "metacognitive calibration" is what builds a resilient culture.
Deliver reinforcement (e.g., AI Copilots) in Teams or Slack so security doesn't feel like an additional workload.
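As a rough illustration of this kind of in-flow reinforcement, the sketch below posts a short, supportive nudge via a Slack incoming webhook. The webhook URL, function names and message wording are all illustrative assumptions, not part of the survey or any specific product; building the message separately from sending it keeps the snippet testable without a network call.

```python
import json
import urllib.request

# Placeholder: replace with your own Slack incoming webhook URL.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"


def build_nudge(user: str, behaviour: str) -> dict:
    """Build a supportive reinforcement message (encouraging, never punitive)."""
    return {
        "text": (
            f"Nice catch, {user}! Reporting that {behaviour} helps protect "
            "the whole team. Here's a 60-second refresher if you'd like one."
        )
    }


def send_nudge(payload: dict, url: str = SLACK_WEBHOOK_URL) -> int:
    """POST the message to the webhook; returns the HTTP status code."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Delivering the nudge where people already work, and phrasing it as appreciation rather than reprimand, mirrors the survey's finding that fear of consequences suppresses reporting.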
Use our online test environment to see how our platform can help you empower your team to continuously avert cyber threats and keep your organisation secure.
