No one wishes to become entangled in a cyberattack, either professionally or personally. Yet, an escalating number of individuals are becoming victims of such online threats. These attacks are not only becoming more frequent but also more severe, with the human element often being at the center of them.
Forrester estimates that 9 in 10 data breaches this year will include some sort of human element that allows information to be taken, money to be stolen, or identities to be compromised. Looking at the scale of the issue, we at SoSafe found that 1 in 2 businesses experienced a successful cyberattack in the past three years – and 64% assess their risk of falling for another one as high. Being involved in an incident is increasingly the norm rather than the exception.
This rise in breaches comes even as an ever-greater portion of the workforce consists of digital natives – savvy internet users who have spent most of their lives online. How, one might ask, are people still falling for these attacks?
Cybercriminals are leveraging AI technology to successfully target humans
A large part of the answer is that an attempted cyberattack today often looks nothing like it did five years ago, two years ago, or even last year. Criminals and bad actors have long understood that the human factor is a reliable entry point into systems, one that bypasses technological defenses – and they are now harnessing emerging technologies to supercharge their hacking capabilities. The same AI tools and large language models that promise to revolutionize customer service and product development are being used to craft spurious requests for information that seem ever more trustworthy. Tools with names like WormGPT and FraudGPT are spreading through hidden message boards and dark-web marketplaces. This is what an email generated by WormGPT could actually look like:
By leveraging generative AI, attackers can produce emails that attempt to access information up to 40% faster than with previous methods. And what's generated is fooling people on a larger scale. In simulated attacks at SoSafe, we created emails that 78% of employees opened, while 65% disclosed personal information and 21% clicked on malicious links or attachments.
Scammers can now also incorporate industry- and company-specific information and then craft grammatically correct, well-formatted messages. The tell-tale signs of a phishing attempt – the strange fonts, the bizarre syntax, the unfamiliar file types – may be absent in these new forms of attack. Worse still, the power of AI means that each attack can be individualized to touch on topics and content that you will personally find compelling. The age of mass personalized spear phishing is here.
Technology plays a role in keeping attackers at bay, and it is effective, but it's not enough. Professional hackers working together, given enough time, will likely overrun any technical defense that IT puts in place – though doing so takes time and effort and may require innovation, perhaps even zero-day exploits. It's much easier to attack an authorized user who can pass through the technical controls with the access privileges they have been assigned.
Users have become our ‘primary attack surface.’ They are the easiest way for an attacker to gain access to our systems, data, and resources – all too often through a staff member trying to be helpful or responding to a customer demand or personal crisis. A stark example is what happened to the software development company Retool. Using AI, the attackers deepfaked the voices of IT personnel to bypass MFA codes after already having obtained an employee’s credentials via a smishing attack. This gave them access to the accounts of 27 customers in the crypto industry, many of whom lost millions in cryptocurrency – including Fortress Trust, which lost $15 million.
This is the “human factor” in cyber defense. People are on the frontline and need to be seen as assets in the fight rather than something that is ‘allowing’ intrusions to happen.
We need to transform awareness and training into safe behavior
Education is vital to managing human risk, and the first stage is awareness of the problem: bad actors are incentivized to steal valuable information and resources from a company. Compliance frameworks have long understood this, confronting security leaders with a series of requirements addressing human-layer risk. However, checking the compliance box is not enough, because these frameworks focus on communicating generic best practices rather than genuinely changing behavior. Too often, security training stops at that baseline – leaving people without the tools and insight they need to be an active part of a company’s defense. The worrying part? A recent NIST study found that 56% of security executives still consider compliance the most important indicator of the success of security awareness and training (SA&T). Yet even while viewing compliance as an important success indicator, they clearly recognized that it may not reflect actual effectiveness in changing behavior and attitudes.
Cyber security training can be dull – endless slides that require a user to click every 20 or 30 seconds to ‘ensure participation,’ mindless quizzes with brain-thuddingly obvious answers. This is no longer fit for purpose. Companies must move their organizations beyond awareness basics that simply don’t cut it against this escalating threat landscape.
Instead, programs need to identify and prioritize the human risks specific to a particular company and then create a corrective action plan to address them. This builds and spreads behaviors that allow people to identify, understand, and respond to threats. Such programs need to consider the full range of human-related risks and instill behaviors that counteract them, accounting for cultural influences, motivational factors, attitudes, context, and emotional responses. The focus should be on the principles behind safe, secure ways to interact with digital information and communication tools – principles that remain valid even if the format or underlying technology shifts.
Training should be engaging. Yes, it needs to cover relevant information, but it especially needs to ensure people learn to apply their knowledge, build good security habits, and understand why these things matter. The good news: We can leverage long-proven psychological approaches. In practice, this means offering a multi-channel experience and contextual learning opportunities wherever people are. These programs deliver bite-size chunks instead of huge blocks of text, employing tactics like gamification, continuous and spaced repetition, interactive components, contextual nudging, and storytelling – all while focusing on positive reinforcement instead of learning through fear.
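To make “spaced repetition” concrete, here is a minimal, hypothetical sketch of a Leitner-style review scheduler. The box count and review intervals are illustrative assumptions for this example only – they are not how any particular training platform, SoSafe included, implements scheduling.

```python
from dataclasses import dataclass

# Illustrative intervals: items in higher boxes are reviewed less often.
REVIEW_INTERVAL_DAYS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

@dataclass
class LearningItem:
    """One trainable security topic, e.g. 'spotting spoofed sender domains'."""
    topic: str
    box: int = 1  # Leitner box; higher box = better retained

def record_answer(item: LearningItem, correct: bool) -> int:
    """Update the item's box after a quiz answer and return days until next review.

    A correct answer promotes the item one box (capped at box 5);
    a miss demotes it back to box 1 for frequent review.
    """
    if correct:
        item.box = min(item.box + 1, 5)
    else:
        item.box = 1
    return REVIEW_INTERVAL_DAYS[item.box]
```

In use, a topic answered correctly moves from daily to ever-sparser reviews, while a missed topic returns to daily practice – spacing out reinforcement of what is retained and concentrating effort where habits haven’t formed yet.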
Behavioral-based human risk management is our only chance to combat the burnout crisis facing security teams today
Companies that don’t act risk overwhelming the specialists dealing with these threats. Burnout in security teams is rising: 66% of security professionals in the United States and Europe suffer significant work stress, while 3.9 million cyber security positions remain unfilled globally. These professionals need help from everyone else to fulfill their mission. Cyber security must become a joint responsibility – and humans have the power to fight back against cybercrime. Equipped with security instincts, they become companies’ biggest ally and the most versatile part of their defense strategies for sustainable risk reduction.
The best way to appreciate the transformation that needs to happen is through one of the most common analogies: give a man a fish and he’ll eat for a day; teach him to fish and he’ll never go hungry. Here, the fish is a purely technological approach, which may stop one threat but doesn’t solve the larger issue. Only by empowering frontline staff through a holistic human risk management program can companies build resilience and sustainably mitigate cyber risk, ensuring they are set up for the long term.
How can SoSafe help you manage and reduce your human risk?
Built on psychology and behavioral science, SoSafe is a leading human risk management platform focused on making secure behavior second nature. We believe people want to do the right thing but often need support to succeed, especially in today’s world, where AI-driven threats are advancing. That’s why we focus on creating security cultures that not only protect against digital threats but also involve individuals in reducing human-related risks.
SoSafe helps organizations achieve this by focusing on several key areas: gamified training with storytelling, personalized learning experiences including phishing simulations with detailed walkthroughs, and ongoing support to ensure training remains effective and continuous. Our Phishing Report Button empowers employees to take action against threats, protecting the entire organization. But there’s more: our conversational bot, Sofie, connects you with your employees directly on their collaboration tools – enabling rapid micro-learning to address emerging threats, strengthening security culture with 24/7 access to a first line of security support, and transforming users into your strongest defense.
On top of that, our Risk Scoring and Culture Automation dashboard helps you ingest first- and third-party data, track activity and risk metrics, assess human risk, identify vulnerabilities, and make data-driven decisions, all in one place.