Is security theatre costing us our personal freedom?

Security illusion
By focusing on making us all feel safer, are we allowing real threats to pass unseen?

There's a new threat creeping into our online lives, reducing how secure we are while we browse the web both at home and at work, yet strangely it's making us feel more secure.

This threat is a concept called security theatre, a term coined by security expert and author Bruce Schneier back in 2003. He defined it as "security measures that make people feel more secure without doing anything to actually improve their security."

He was referring mostly to the knee-jerk reactions seen in the field of airport security and the new measures implemented after the 9/11 attacks, most of which would have done nothing to stop the terrorist attacks even if they had been in place beforehand. Even those measures that would have directly challenged how the terrorists operated would merely have forced them to make minor changes to their plans.

However, security theatre isn't limited to transport security. It's evident in a lot of corporate security policies and procedures, and is even invading our homes. We're already spending time and money thinking we're protecting ourselves, when in fact we're doing nothing to increase how safe we are and may even be putting ourselves more at risk.

It's an easy trap to fall into because this is how we're psychologically wired, as Schneier explains in his article entitled 'In Praise of Security Theatre'. "Security is both a reality and a feeling," he writes. "The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures… Security is also a feeling, based on individual psychological reactions to both the risks and the countermeasures. And the two things are different: you can be secure even though you don't feel secure and you can feel secure even though you're not really secure."

This explains the ridiculous situation of having to divide a 200ml bottle of baby milk into two 100ml bottles before being allowed to board a plane. Clearly you still have the same amount of liquid, so if it were dangerous, splitting it up hasn't limited the risk in the slightest.

It seems, though, that people really are reassured when they see officials doing something in response to a security threat, even if what they're doing, when you think about it logically, only covers off very specific risks, which could easily be sidestepped by anyone truly committed to doing you harm.

We're not just talking about spending millions on airport full-body scanners whose software can't identify low-density substances such as harmful chemical powders or liquids, and would be unlikely to detect plastic weapons. Or the forced implementation of face-scanning software, which is on average only 30 per cent accurate.

Or even the pre-screening of boarding information to target riskier travellers, which has been shown not to work: the criteria used to decide who is safe and who isn't are widely available and so easily sidestepped - just choose someone who is as far away from that profile as possible to do your dirty work for you. Now your terrorist can just walk in and is far less likely to be picked out for a search.

In this case the security policy has actually provided a description of the type of person you need to use to avoid being detected as a potential terrorist. However, it's not just airports that are havens for security theatre. You encounter it in much more common areas, from your place of work to your home, and there are many people out there trying to highlight this.

In fact, after talking to a cross-section of them, we're willing to bet that you've performed at least one act of computer security theatre, or felt guilty about not doing so, within the past month, if not more recently.

Security theatre and you

Don't believe us? The world of IT security is rife with examples of security theatre of one type or another, each carrying a different level of risk. Everything from automated password changes, backing up work to external drives and using antivirus software, to acting on website security certificates is costing you time - and therefore any company you work for, money - yet is likely to be ineffectual at making you any safer online.

Time is an important economic factor and even your own personal free time is slowly being eroded by the patchwork of policies people adopt to make themselves feel safer while using computers. Nowadays, there are so many things a conscientious home PC user is expected to do that we're in danger of becoming overwhelmed and a lot of it is demonstrably not worth it.

We're expected to choose the best antivirus and firewall software for ourselves, ensure it's regularly updated by downloading the latest security updates, and watch our computers slow to a crawl as regular background checks and scans are carried out. We're expected to update our browsers so that the software we use to browse the web doesn't have security flaws from using outdated connection methods, and to check out websites' security certificates (see 'Security theatre certificates', opposite) - often choosing to ignore them when we're pretty certain the warnings, as seriously worded as they are, are false positives. And we're expected to back up all our important files to a spare hard drive on a regular basis.

It isn't all a waste of time, of course, but a lot of it is. As Schneier explains: "Like real security, security theatre has a cost. It can cost money, time, concentration, freedoms and so on. It can come at the cost of reducing the things we can do. Most of the time security theatre is a bad trade-off, because the costs far outweigh the benefits."

The economics

It's not just Schneier who recognises this. Microsoft has even put together a research paper on the subject. Written in 2010 by Cormac Herley and with a title giving a nod to the late, great Douglas Adams, an author who was known for ridiculing the bureaucracy so often found in corporate life, Herley's paper, 'So long, and no thanks for the externalities: The rational rejection of security advice by users', exposes the dangers of strict adherence to IT security guidelines when it's clear they aren't much use in the real world and can even be counterproductive.

Although the paper focuses on the costs of implementing useless security procedures, it highlights security theatre just the same. As the paper's abstract states, "It is often suggested that users are hopelessly lazy and unmotivated on security questions. They choose weak passwords, ignore security warnings and are oblivious to certificate errors. We argue that users' rejection of the security advice they receive is entirely rational from an economic perspective."

In other words, the indirect costs of, for example, getting your staff to update their passwords every couple of months, and then dealing with the fallout of people forgetting them, are not the most economic use of time, especially when, as we shall see, this doesn't seem to offer any more security.

Automated password changes

Although on the face of it regular password changes seem an excellent idea, the practice has actually left a lot of people's work passwords far less secure than if they had simply kept their original ones in the first place.

We spoke to Graham Cluley, computer security consultant for internet security firm Sophos, and he agrees. "Many firms regularly ask users to change their passwords," he says. "The idea appears to be that this makes the password more of a moving target. If your firm tells you that you ought to change your password every month or every three months, it can actually be a negative thing. Making people choose new passwords all the time may make them choose weaker ones, or ones that are more predictable. For instance, users might adopt poor habits and choose passwords such as Password1, Password2, as they change it each month."

He adds, "Of course, you should change passwords if you believe a password has been compromised but you shouldn't give a blanket rule for the entire organisation telling them to all change their passwords every eight weeks, unless there's a good reason."
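To see why the 'moving target' argument falls down, here's a minimal sketch of what an attacker could do with a single leaked password that follows the Password1, Password2 pattern Cluley describes. The function and the example password below are hypothetical, purely for illustration.

```python
import re

def guess_next_rotations(leaked_password, how_many=12):
    """Given one leaked password ending in a number (e.g. 'Password7'),
    list the obvious follow-on guesses to try after each forced change."""
    match = re.match(r"^(.*?)(\d+)$", leaked_password)
    if not match:
        return []
    stem, counter = match.group(1), int(match.group(2))
    # The 'moving target' moves one predictable step at a time, so a
    # handful of guesses covers months of mandatory password changes.
    return [f"{stem}{counter + i}" for i in range(1, how_many + 1)]

print(guess_next_rotations("Password7"))
# ['Password8', 'Password9', 'Password10', ...]
```

In other words, once one password in such a sequence leaks, every future 'change' is trivially guessable.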

If you have a system like this at your work, it's likely that something generic replaced your original, personal and unlikely-to-be-guessed password long ago, just so you could remember it and because these systems often won't let you revert to old passwords.

What's stranger still is that, if you analyse it, the logic behind regularly changing your password is flawed in the first place. Firstly, what is the reason to change your password? In case it has already been compromised? This would assume that the criminals who have found out your password have, for some reason, been sitting on this information rather than using it to their advantage. That's a bit like someone stealing your house keys out of your bag, then waiting until you've had a chance to change the locks before attempting a burglary.

The only way this practice could help is if you managed to change your password the second it was compromised, a reasonably unlikely occurrence if you aren't aware it has happened.

The second problem with following this advice blindly is that the easiest way to get malicious software onto a person's PC, and therefore gain access to a corporate network or their personal bank details, is not to try to guess passwords, it's to go for the weakest link in any chain - the users themselves.

As Schneier says, "Only amateurs attack machines; professionals target people." And this quote seems to be backed up by Herley's Microsoft research paper when he states, "The best way to get software onto any machine is to get the user to install it and human error is behind many of the most serious exploits."

So a hacker doesn't even need your password - they enter your system via the backdoor you create by installing their software.

How about other password rules, then, like those used by banking websites that insist on a minimum length, that your password isn't a word in the dictionary, that it contains special characters and that you don't share it between sites? Surely these are sensible?

Not when you consider that, as mentioned, cracking a password isn't how most criminals attempt to get into your system. After simple human error, the two most common ways criminals gain access to restricted areas and accounts are phishing (getting you to reveal your password by making it look like you're logging into your account when you aren't - for example, those fake bank emails with a 'login here' button that takes you to a site that looks like, but isn't, your bank's) and keylogging (a program installed on your PC, or a shared one, that records every keystroke you make and keeps logs that can be searched for likely passwords).
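To make the phishing trick concrete, here's a minimal sketch of the mismatch at the heart of those fake bank emails: the text you see on the link and the address it actually takes you to are two different things. The email fragment and the domain names below are made up for illustration.

```python
from html.parser import HTMLParser

# A hypothetical phishing email fragment: the visible text looks like your
# bank's address, but the href points somewhere else entirely.
email_html = '<a href="http://secure-login.example.net/yourbank">www.yourbank.co.uk/login</a>'

class LinkChecker(HTMLParser):
    """Collects the target (href) and the visible text of a link."""
    def __init__(self):
        super().__init__()
        self.href = None
        self.text = ""
        self._in_link = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._in_link = True
            self.href = dict(attrs).get("href")

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_link = False

    def handle_data(self, data):
        if self._in_link:
            self.text += data

checker = LinkChecker()
checker.feed(email_html)
print("Looks like:", checker.text)   # www.yourbank.co.uk/login
print("Goes to:   ", checker.href)   # http://secure-login.example.net/yourbank
```

However strong the password you then type into that page, it arrives straight in the criminals' hands.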

Here again, it doesn't matter how unbreakable your password is if the criminals know what it is - which makes something of a mockery of all the time you spent thinking up and memorising those unique passwords.

Real threat - bad responses

That's not to say that the threat isn't real. As Herley explains: "The range of attacks directed against internet users is vast and growing. Their computers are constantly targeted by viruses, worms, port scanning software, spy-ware, adware, malware, keyloggers, rootkits, and zombie and botnet applications. One study reports that an unpatched Windows PC will be compromised within 12 minutes of connecting to the internet."

Herley acknowledges that security procedures are needed, but he questions whether security experts are right to assume they know the right thing to do: "In one view of the world users are ignorant of the risks they face and must be educated to save them from themselves. 'If only they understood the dangers,' the thinking goes, 'they would behave differently.' However, this presupposes that we (security experts) understand the risks better than users. Do we? Do we have evidence to demonstrate that users who follow advice fare better than those who ignore it and that the difference is worth the extra effort?"

In Herley's view, the short answer to these questions is no. And it's easy to see why. With a complete lack of information available on how criminals are getting the details of the user accounts they hack, are we right to force users to follow a prescribed set of rules that don't bear close examination?

Herley's paper also comes up with some, admittedly rather basic, calculations of just what these types of procedures can cost in terms of user effort if you look at the whole of America. He estimates that with 180 million online adults in the US, all of whom he assumes earn twice the minimum wage of $7.25 an hour, every minute per day of each user's time spent on something like choosing a new password adds up to about $15.9 billion a year. That's a lot to spend on a pretty much pointless exercise.
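As a sanity check, the arithmetic behind that figure is easy to reproduce. The sketch below simply plugs in the numbers quoted above (180 million users, twice the $7.25 minimum wage, one minute per user per day); the assumptions are Herley's, the code is just the sum.

```python
# Rough reconstruction of Herley's estimate for one minute of security
# effort per user per day, using the figures quoted in the article.
users = 180_000_000          # online adults in the US
hourly_rate = 2 * 7.25       # twice the US minimum wage, in dollars
cost_per_minute = hourly_rate / 60

annual_cost = users * cost_per_minute * 365
print(f"${annual_cost / 1e9:.1f} billion per year")   # roughly $15.9 billion
```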

We asked Bruce Schneier why corporations insist on carrying out these security theatre-type procedures when they must be aware of how little effect they have. His answer is simple: "What makes you think they're aware?"

In response to the same question, Graham Cluley adds, "Sometimes security issues are complex and IT teams might feel it's easier to go with the crowd and please staff and senior managers by being seen to do a proper job, rather than concentrate on the things that really matter."

David Emm, senior security researcher at Kaspersky Lab, believes this is compounded by the fact it's very difficult for companies to know how and where to invest in security, and it's this uncertainty that can lead to security policy decisions that may turn out to be a waste of time and money.

"Sometimes there's a risk that security theatre may creep into the way potential online dangers are framed," he says. "For example, from time to time there's a heavy focus placed on the costs associated with cybercrime. But the numbers quoted vary dramatically. It's actually impossible to put an accurate figure of the costs. It's a bit like trying to calculate the earnings of drug dealers - the criminal nature of their activities makes it guesswork at best. The reason why numbers are perceived to be important is to provide justification for security investment - it seems like a tangible way of showing what the benefit of the investment will be. I believe it makes more sense for companies to look at the real risks they've faced - let's say during the previous quarter or year - and show how security measures have, or will, mitigate the risks."