Still treating users as the enemy: entrapment and the escalating nastiness of simulated phishing campaigns

Prof. M. Angela Sasse, Chief Scientific Advisor, OutThink

This article by Steven J. Murdoch and Angela Sasse previously appeared on Bentham’s Gaze, the blog of the UCL Information Security group.

Three years ago, we made the case against phishing your own employees through simulated phishing campaigns. They do little to improve security: click rates tend to be reduced (temporarily) but not to zero – and each remaining click can enable an attack. They also have a hidden cost in terms of productivity – employees have to spend time processing more emails that are not relevant to their work, and then spend more time pondering whether to act on emails.

In a recent paper, Melanie Volkamer and colleagues provided a detailed listing of the pros and cons from the perspectives of security, human factors and law. One of the legal risks was finding yourself in court facing one of the 600-pound gorillas of the digital enterprise world for trademark infringement – Facebook has objected to its trademark and domain being impersonated. Such companies also likely don't want their brands used in attacks because, contrary to what some vendors tell you, being tricked by your employer is not a pleasant experience.

Negative emotions experienced with an event often transfer to anyone or anything associated with it – and negative emotions are not what you want associated with your brand if your business depends on keeping billions of users engaging with your services as often as possible.

Recent tactics employed by the providers of phishing campaigns can only be described as entrapment – to “demonstrate” the need for their services, they create messages that almost everyone will click on. Employees of the Chicago Tribune and GoDaddy, for instance, received emails promising bonuses.

Employees had their hopes of extra pay raised and then cruelly dashed – and, on top of that, were hectored for being careless about phishing. Some employees vented their rage publicly on Twitter, and the companies involved apologised. The negative publicity may eventually be forgotten, but the resentment of employees who feel not only tricked but humiliated and betrayed will not fade any time soon.

The increasing nastiness of entrapment has seen employees targeted with promises of COVID vaccinations from their employer – and then ridiculed for their gullibility instead of lauded for their willingness to help.

Creating negative experiences and antagonising your employees in the name of security is the wrong way to go. The paper Users Are Not The Enemy is much cited for its early identification of the problems caused by impossible password policies, but the title pointed to its real conclusion: waging war on employees in the name of improving security creates a negative perception of security, ties up resources, and thus benefits only the real, external enemy – the attackers.

The feeling of betrayal that results from entrapment phishing destroys a most precious resource – employee trust and goodwill. Beris et al. pointed out that knowing about security risks is not enough – when employees don’t feel positive about the company and its security measures, they won’t make the effort when nobody is watching. And people who feel tricked by their company will respond emotionally, wanting to hit back – so targeting employees with nasty security increases the potential for insider attacks.

Tempting employees with promises of potentially lifesaving treatment not only harms their relationship with the company but can also damage the effectiveness of legitimate public-health communications in the middle of a pandemic. One vendor will even send out misleading vaccine conspiracy-theory emails. The fact that criminals act immorally is no excuse for companies to do the same, nor is the effectiveness of COVID-related phishing. Even the CIA, not known to be a paragon of virtue, won't use vaccination programmes as a lure for its operations.

We have to stop blaming users for failing the impossible task of telling the difference between genuine and fake messages and find better ways of dealing with the threat.

All simulated phishing does is mimic what attackers do, in the vain hope that users will somehow learn to tell the difference. It is akin to police carrying out burglaries to teach people to lock their doors and windows, or fire wardens setting fire to the building to teach people how to evacuate. As for conducting training against bullying and sexual harassment in this way… we'll leave that to your imagination.

Even with employees who have been entrapped and told, yet again, that phishing messages exist and how they are supposed to recognise them, a well-timed and plausible email (an important business announcement, a traffic disruption) will catch some people out.

Companies can and should do better than leave their employees to deal with these attacks and then add more of their own. Clicking on links in emails is an essential part of many jobs, so that is what employees will do, no matter how nasty the training becomes. Companies must make everyday activities safe. Two-factor authentication with WebAuthn makes passwords collected through phishing worthless to attackers. Malware can be blocked on the network, and end-host protection can catch what makes it through.
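To see why WebAuthn resists phishing where passwords don't, consider a highly simplified model of its origin binding (this is an illustrative sketch, not the real protocol or a real library – the function names, the HMAC stand-in for public-key signatures, and the example domains are all ours): the authenticator signs over client data that includes the web origin, and the relying party rejects any assertion made for a different origin, so credentials captured on a lookalike domain are useless.

```python
import hashlib
import hmac
import json
import os

# Illustrative stand-in for the credential's private key. Real WebAuthn uses
# asymmetric signatures; HMAC keeps this sketch self-contained.
SECRET_KEY = os.urandom(32)

def authenticator_sign(challenge: bytes, origin: str):
    """Browser/authenticator side: the signed payload embeds the origin.

    In real WebAuthn the browser, not the user, supplies the origin, so a
    phishing page can only obtain assertions bound to its own domain.
    """
    client_data = json.dumps(
        {"challenge": challenge.hex(), "origin": origin}
    ).encode()
    signature = hmac.new(SECRET_KEY, client_data, hashlib.sha256).digest()
    return client_data, signature

def relying_party_verify(challenge: bytes, client_data: bytes,
                         signature: bytes, expected_origin: str) -> bool:
    """Server side: reject assertions for the wrong origin or challenge."""
    data = json.loads(client_data)
    if data["origin"] != expected_origin:
        return False  # assertion was made for a phishing domain
    if data["challenge"] != challenge.hex():
        return False  # stale or replayed challenge
    expected_sig = hmac.new(SECRET_KEY, client_data, hashlib.sha256).digest()
    return hmac.compare_digest(expected_sig, signature)

challenge = os.urandom(16)

# Legitimate login: origin matches, assertion accepted.
cd, sig = authenticator_sign(challenge, "https://example.com")
print(relying_party_verify(challenge, cd, sig, "https://example.com"))  # True

# Phishing: the lookalike site can only get an assertion bound to its own
# origin, which the real relying party rejects.
cd2, sig2 = authenticator_sign(challenge, "https://examp1e.com")
print(relying_party_verify(challenge, cd2, sig2, "https://example.com"))  # False
```

Contrast this with a password: whatever the user types into the lookalike page works verbatim on the real site. With origin-bound credentials, the employee who clicks simply cannot hand the attacker anything reusable – which is the point of making everyday activities safe rather than blaming the click.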

Employees have a part to play in protecting a company. Training can improve resilience when it is designed to address the threats facing the company, is adapted for specific employees' roles, and is delivered in a safe place. The most persistent and skilled adversaries will still find a way into the organisation, but harm can be limited if employees report unusual activity and work with their peers and IT. That will only happen if employees feel they are on the same side as IT security – precisely the relationship that mock phishing damages.

About the Author:
Dr Steven J. Murdoch is a Royal Society University Research Fellow in the Information Security Research Group of University College London, working on developing metrics for security and privacy. His research interests include authentication/passwords, banking security, anonymous communications, censorship resistance and covert channels. He is also working on analysing the security of banking systems, especially Chip & PIN/EMV, and is Innovation Security Architect at VASCO. He is a Fellow of the IET and BCS.

Professor M. Angela Sasse FREng is Professor of Human-Centred Technology at University College London and Director of the UK Research Institute in Science of Cyber Security. Her security research covers human factors in security, usable authentication (passwords, biometrics, 2FA), usable access control, security education and training, economics of security, user-centred privacy, user-centred identity mechanisms and management, and trust. Specialties: human behaviour and technology, information security, computer security. “I love technology, and want everybody to enjoy the benefits that it can bring.”

Editor’s Note: The opinions expressed in this and other guest author articles are solely those of the contributor, and do not necessarily reflect those of OutThink Ltd.

