
Security Awareness vs Secure Behaviour: Why Training Fails and What Actually Reduces Human Cyber Risk
Mar 04

Organisations invest heavily in security awareness training, yet human-initiated cyber incidents persist because knowledge alone does not reliably translate into secure behaviour during real work. Research shows training can improve understanding of threats, but measurable changes in actual behaviour are often minimal without reinforcement or additional interventions.
Operational studies likewise indicate that traditional anti-phishing training may have little effect on whether employees click malicious links or report suspicious messages, highlighting a persistent gap between awareness and action. Social engineering attacks continue to succeed because they exploit human vulnerabilities such as decision-making under pressure, routine behaviour, and contextual factors rather than purely technical weaknesses.
Generative AI is intensifying this challenge from both outside and inside organisations. Attackers can use AI to produce highly personalised phishing emails at scale, with research showing AI-generated messages can perform comparably to those written by human experts. Surveys also show many users struggle to distinguish AI-generated phishing from legitimate emails. At the same time, employees increasingly use AI tools in everyday office workflows, sometimes entering sensitive information into public or unapproved systems (shadow AI), creating new risks of data exposure.
This article explains why awareness alone fails, how behaviour and context drive security outcomes, how GenAI expands both external and internal risk, and which evidence-based approaches can more effectively reduce human-initiated incidents in modern enterprises.
Why security awareness training rarely changes employee behaviour
Security awareness training is built on a simple assumption: if employees understand how attacks work, they will act more safely. In practice, that link is far weaker than organisations expect. Research consistently shows that while training improves knowledge, confidence, and attitudes toward security, it does not reliably change how people behave when a real message lands in their inbox. In other words, employees may know the signs of a phishing email and still click it when the situation feels urgent or routine.
Controlled studies reinforce this gap between awareness and action. Experiments have found that employees who completed training often perform no better in phishing simulations than those who have not received training, suggesting that information alone does not translate into safer decisions. Real-world enterprise data tells a similar story: staff who had finished annual training within the previous month performed about the same in phishing tests as colleagues who had not trained for over a year.
One reason is simply that people forget. Skills gained immediately after training can fade quickly if they are not reinforced, sometimes within a few months. As familiarity declines, employees tend to fall back on their usual habits rather than the procedures taught in a course.
More fundamentally, knowing what to do is not the same as doing it. Reviews of multiple studies show that behaviour during real work is shaped by distractions, workload, and time pressure, not just awareness. Large operational studies therefore find that recent training often has little measurable impact on whether employees fall for phishing attempts, underscoring how difficult it is to turn awareness into lasting habits.
Even if awareness consistently improved behaviour, organisations would still face a difficult challenge. Not all phishing attempts are equal, and some are designed so convincingly that training alone cannot counter them.
Some phishing attacks work against well-trained staff
Even well-trained employees are not immune to phishing, because the effectiveness of an attack often depends more on how the message is crafted than on how much training the recipient has received. Real-world data shows that failure rates can vary widely based on persuasion tactics, timing, and how legitimate the request appears. Messages that mimic everyday work communication or come from seemingly trusted sources are far more likely to succeed than generic scam emails.
Controlled studies support this pattern. Researchers have found that the inherent difficulty of a phishing email (how convincing, relevant, or contextually believable it is) predicts user behaviour more strongly than prior awareness training. In other words, a highly realistic message can bypass even well-informed users because it fits seamlessly into their normal workflow.
This helps explain why sophisticated social engineering campaigns continue to succeed inside organisations that invest heavily in training. Attackers are not simply targeting uninformed employees; they are deliberately designing messages to trigger trust, urgency, or routine responses. When an email appears to come from a manager, a colleague, or a familiar service, recipients are more likely to act quickly rather than pause to analyse it.
But message design is only part of the story. The circumstances in which employees encounter these messages are just as important, because decisions are rarely made in calm, controlled conditions.
Human decisions under pressure drive security failures
Security failures rarely occur in calm and controlled conditions. They happen in the middle of everyday work.
Employees are often responding between meetings, dealing with urgent requests, or clearing backlogs, which leaves little time to examine messages carefully. Researchers increasingly describe phishing as an attention problem rather than a knowledge problem, especially in fast-paced operational environments where cognitive load is high.
Even organisations with strong technical controls remain vulnerable because attackers target human reactions, not just systems. Phishing emails are deliberately crafted to provoke automatic responses, such as replying quickly to a senior executive, resolving an urgent issue, or completing what appears to be a routine task. These cues exploit normal workplace dynamics such as hierarchy, trust, and time pressure, making risky actions feel reasonable in the moment.
Susceptibility also varies widely between individuals and situations, showing that behaviour is shaped by context, workload, incentives, and underlying human needs, not just awareness. Research on human factors highlights how stress, distraction, and environmental pressures influence decision-making in cybersecurity contexts.
Insights drawn from Maslow-based human-centric frameworks reinforce this idea: when basic needs such as job security, trust in leadership, or psychological safety feel threatened, people prioritise those concerns over careful security judgement. Organisations often invest heavily in technology while overlooking these human foundations, which means decisions made under pressure can undermine even well-designed defences.
In practice, this means security outcomes depend less on what employees know and more on the conditions in which they are forced to act. If risky behaviour is shaped by context, habit, and pressure, it follows that short bursts of training are unlikely to produce lasting change. Yet many organisations still rely on periodic awareness sessions as their primary defence. Evidence suggests this approach creates temporary caution rather than durable behavioural improvement.
One-off training does not produce lasting secure behaviour
A single awareness session can create a brief spike in caution, but that effect rarely survives contact with everyday work. Studies show people are often better at spotting suspicious emails immediately after training, yet those gains fade as weeks pass and the material is no longer top of mind. Without reminders or practice, employees fall back on familiar routines, which usually prioritise speed and task completion over careful scrutiny.
Real-world evidence suggests that isolated training events have little impact on outcomes in practice. In one large enterprise study tracking thousands of employees over multiple phishing campaigns, those who had recently completed mandatory training were no less likely to fall for simulated attacks than those who had not. This gap highlights a broader problem: awareness may increase immediately after training, but behaviour in real situations is shaped by habit, workload, and context rather than memory of a course.
Controlled research points to the same conclusion. Traditional classroom-style instruction or one-time modules tend to produce only marginal improvements, suggesting that simply delivering information is insufficient to alter how people behave under pressure. More adaptive approaches work better because they reinforce behaviour continuously. Studies of embedded phishing training show that improvements often come from repeated reminders and nudges, not from the training content itself, which helps build lasting habits over time.
These challenges already make human risk difficult to manage, but the threat landscape is not standing still. New technologies are changing both how attacks are conducted and how employees work, introducing additional layers of complexity. Generative AI, in particular, is amplifying risks on multiple fronts.
Why GenAI creates new human risk
Generative AI is reshaping social engineering by making it easier to produce highly convincing messages quickly and at scale. Modern language models can automatically generate personalised phishing emails tailored to specific targets, dramatically increasing both the volume and realism of attacks employees must evaluate. Studies show AI-generated messages can perform on par with human-crafted phishing, meaning attackers no longer need extensive time or expertise to create persuasive content.
At the same time, AI is not just an external threat: it is now embedded in everyday office workflows. Employees use AI tools to draft emails, summarise documents, analyse data, and write code, often under tight deadlines. This speed can reduce the time available for careful scrutiny of requests or outputs, increasing the likelihood of mistakes. Because phishing still requires a human response to succeed, GenAI does not replace traditional risks; it amplifies them by accelerating both communication and decision-making.
The internal risk may be even more significant. Many employees use public or unapproved AI tools to complete work tasks, a practice often referred to as “shadow AI.” Reports show that large numbers of workers paste sensitive corporate information into these systems, sometimes from personal accounts outside organisational oversight. In some studies, over half of such interactions involved confidential data, creating new pathways for accidental exposure or compliance breaches.
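One practical countermeasure to shadow AI is a lightweight guardrail that checks prompts before they leave the organisation. The sketch below is a minimal, hypothetical pre-submission check: the pattern names and regular expressions are illustrative assumptions, far simpler than a real data-loss-prevention policy.

```python
import re

# Illustrative patterns for common sensitive-data types. A real DLP policy
# would be far broader; these simplified regexes are assumptions for the sketch.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in `text`."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

# Example: a prompt an employee might paste into a public AI tool.
prompt = "Summarise this: contact jane.doe@acme.example, card 4111 1111 1111 1111"
hits = flag_sensitive(prompt)
if hits:
    print("Blocked: prompt contains " + ", ".join(hits))
```

The point is not the detection logic itself, which any serious tool would refine, but where it sits: at the moment of submission, before the data is exposed, rather than in a training module months earlier.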
GenAI expands human risk on both fronts: by enabling attackers to scale deception externally and by introducing new avenues for unintended data leakage inside the workplace. Given that awareness alone is insufficient and risks are increasing, the key question becomes practical rather than theoretical. Organisations need approaches that produce measurable improvements in real-world behaviour, not just knowledge. This shifts the focus from education to intervention.
Behaviour-focused interventions that actually reduce risk
If awareness alone doesn’t change behaviour, what does?
Research increasingly points to interventions that shape decisions in the moment, not just knowledge beforehand. Real-time prompts, embedded warnings, and adaptive guidance can significantly improve how people respond to suspicious messages because they influence attention at the exact point of action. Experimental systems that provide visual cues while users evaluate emails have been shown to increase phishing detection accuracy from roughly 75% to over 90%, demonstrating the power of immediate behavioural support.
Ongoing simulations with feedback also play a critical role. Longitudinal studies across multiple organisations show that continuous phishing exercises combined with targeted training can substantially reduce successful compromises, in some cases cutting rates by half within months. These improvements occur because repeated exposure builds practical judgement and reinforces secure habits rather than relying on memory of rules.
Crucially, effective programs focus on experience, not instruction. Scenario-based simulations, gamified exercises, and adaptive feedback help employees practise responses under realistic conditions, which improves retention and transfer to real situations. Evidence suggests such multi-layered behavioural approaches can reduce breach rates by up to 52% while strengthening organisational resilience.
Behavioural science frameworks reinforce why these methods work. Techniques such as nudging, feedback loops, and habit formation target the psychological drivers of action, such as motivation, attention, and perceived risk, rather than assuming people will apply abstract knowledge under pressure.
Putting these ideas into practice requires more than tweaking training methods. It calls for a broader rethink of how organisations manage human cyber risk as a whole.
Designing behaviour-centric security strategies beyond awareness
Research increasingly recommends shifting from awareness-based programs to behaviour-centric security strategies that reflect how people actually work. One-time training cannot account for the wide variation in risk exposure across roles, departments, and contexts, so effective programs tailor interventions to specific behaviours and vulnerabilities. Analysts note that human-risk management approaches aim to protect people with minimal effort on their part by adapting policies, tools, and training to real behaviour rather than assuming knowledge will drive action.
Crucially, organisations are also rethinking how success is measured. Completion rates and quiz scores indicate activity, not protection. Behaviour-centric programs instead track outcomes such as phishing reporting rates, response times, and reductions in risky actions, which are metrics that directly reflect how employees behave in real situations. This shift recognises that security outcomes depend on habits and reflexes, not on whether someone finished a course.
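These outcome metrics are straightforward to compute from simulation logs. The sketch below assumes a hypothetical per-employee event record from one phishing campaign and derives reporting rate, click rate, and median time-to-report.

```python
from statistics import median

# Hypothetical per-employee records from one phishing simulation campaign.
# "outcome" is "reported", "clicked", or "ignored"; "minutes_to_report" is
# present only when the employee reported the message.
events = [
    {"outcome": "reported", "minutes_to_report": 4},
    {"outcome": "clicked"},
    {"outcome": "reported", "minutes_to_report": 12},
    {"outcome": "ignored"},
    {"outcome": "reported", "minutes_to_report": 7},
]

def behaviour_metrics(events: list[dict]) -> dict:
    """Outcome-based metrics: reporting rate, click rate, median report time."""
    n = len(events)
    reported = [e for e in events if e["outcome"] == "reported"]
    clicked = [e for e in events if e["outcome"] == "clicked"]
    return {
        "reporting_rate": len(reported) / n,
        "click_rate": len(clicked) / n,
        "median_minutes_to_report": median(e["minutes_to_report"] for e in reported),
    }
```

Tracked campaign over campaign, these numbers show whether behaviour is actually shifting, which is exactly the signal that completion rates and quiz scores cannot provide.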
Effective strategies also treat security as a continuous process rather than a periodic event. Because threats evolve, especially in AI-enabled environments, programs must adapt based on real-world behaviour data and emerging risks. Long-term studies show that sustained, targeted interventions reduce susceptibility over time, reinforcing the need for ongoing adjustment rather than static instruction.
Ultimately, designing security beyond awareness means embedding secure behaviour into everyday workflows, culture, and decision-making. The goal is not to make employees security experts, but to create conditions in which the safest action becomes the easiest and most natural one.
That's a wrap!
Human cyber risk is driven less by what employees know and more by how they behave during everyday work. Evidence shows that sustained, behaviour-focused interventions, particularly those that provide guidance and reinforcement at the moment of decision, are more effective than awareness alone in reducing incidents.
As organisations adopt AI-driven tools that increase both productivity and exposure to sophisticated attacks, managing human risk increasingly requires continuous measurement, contextual guidance, and adaptive support rather than periodic training. This shift toward behaviour-centric security is reflected in modern human-risk management approaches used by leading enterprises and platforms such as OutThink, which focus on influencing real decisions instead of simply delivering information.
Sources
- https://www.cybersecuritydive.com/news/cybersecurity-awareness-training-research-flaws/803201
- https://arxiv.org/abs/2506.19899
- https://ejournal.resincen.org/index.php/quanta/article/view/30
- https://arxiv.org/abs/2412.00586
- https://www.technewsworld.com/story/data-in-the-wild-40-of-employee-ai-use-involves-sensitive-info-180156.html
- https://nypost.com/2025/10/03/tech/most-adults-couldnt-differentiate-between-authentic-ai-phishing-emails/
- https://www.sciencedirect.com/science/article/pii/S240584401935666X
- https://expertinsights.com/security-awareness-training/phishing-training-study-bh25
- https://cybernews.com/security/phishing-training-doesnt-reduce-phishing-failure-rates/
- https://arxiv.org/abs/2409.01378
- https://pmc.ncbi.nlm.nih.gov/articles/PMC6606995/
- https://www.cisoplatform.com/profiles/blogs/what-maslow-s-hierarchy-of-needs-reveals-about-cybersecurity-flaw
- https://www.sciencedirect.com/science/article/pii/S221421262400262X
- https://www.techrepublic.com/article/news-employees-share-company-secrets-on-chatgpt/
- https://arxiv.org/abs/2106.06907
- https://arxiv.org/abs/2510.27298
- https://www.emergentmind.com/topics/phishing-resilience-training
- https://www.cybersecuritydive.com/news/security-awareness-training-strategy/733468/