To be more secure, you need to be more transparent
Being better prepared means knowing how they're likely to attack
Early in May, the military's head of cyber-security, Major General Jonathan Shaw, admitted to The Guardian that the Ministry of Defence was vulnerable to online attack and that there had already been a small number of serious incidents. What was more shocking was that Shaw said it was a surprise to people just how vulnerable they were.
Presumably, these weren't simple denial-of-service attacks. They must have been organised, coordinated and planned, with the crackers targeting specific systems and vulnerabilities. The worrying part is the implicit 'head in the sand' approach to some of the ministry's security procedures.
It sounds like leaving the front door open and relying on the moral will of the citizenry to do the right thing. And that's never going to happen. Not because of any global lapse in morals or a desire to trespass or be evil, but because you can't fight the curiosity of clever teenagers with too much time on their hands.
Opaqueness is a challenge
The only way to do security is, in my opinion, with complete and unblinking transparency. Obfuscating your procedures only gives you a false sense of security. It adds nothing to whatever systems you may have, and shows only a lack of confidence in those systems and their ability to thwart an attack.
A determined cracker will see obfuscation as an invitation and a challenge. It will entice their teenage curiosity. The technology has changed, but I doubt there's a difference in the motivations of today's crackers and those of 20 years ago, scripting their modems to silently dial local telephone numbers throughout the night in the hope of seeing that elusive 'CONNECT' in a terminal session.
If you're a geek, there are few things as exciting. Some may become criminal, political and dangerous, but they might never have got that far if they couldn't start with easy targets and by picking off the low-hanging fruit. Which is why your best recourse is always cold, sober security.
To do this, of course, you need to be fully aware of any vulnerabilities. You need to keep systems up-to-date and know where to look for any incursions. Security mustn't be treated as a state secret and the holders of those secrets can't act like members of a clandestine lodge.
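Knowing where to look for incursions usually starts with the logs your systems already keep. As a minimal sketch (the sshd-style log lines and the `failed_logins` helper are my own illustration, not anything from the article), here's how you might tally failed SSH login attempts by source address:

```python
import re
from collections import Counter

def failed_logins(log_lines):
    """Tally 'Failed password' entries by source IP from sshd-style log lines."""
    ip_pattern = re.compile(r"Failed password .* from (\d{1,3}(?:\.\d{1,3}){3})")
    counts = Counter()
    for line in log_lines:
        match = ip_pattern.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

# Hypothetical sample lines in the style of an OpenSSH auth log.
sample = [
    "sshd[101]: Failed password for root from 10.0.0.5 port 4022 ssh2",
    "sshd[102]: Accepted password for alice from 10.0.0.9 port 4100 ssh2",
    "sshd[103]: Failed password for invalid user admin from 10.0.0.5 port 4023 ssh2",
]
print(failed_logins(sample))  # Counter({'10.0.0.5': 2})
```

A repeated offender in a summary like this is exactly the kind of incursion signal that stays invisible if nobody is allowed to look.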
Keeping things secret doesn't make things any safer. Reading documents about breaking into a system doesn't make you a criminal, any more than imagining how a burglar might scale your garden fence and cross the garage roof makes you a thief.
White hats are essential
You need to be better prepared than the crackers. And, to be better prepared, you need to know how they're likely to attack, both by learning their techniques and by penetration testing your own hardware. You need to wear a white hat.
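The simplest white-hat exercise is seeing your own machine as a cracker would: probing which ports answer. A minimal sketch, for use only against systems you own (the `scan_ports` helper is my own illustration):

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Check a few well-known service ports on your own machine.
print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

Anything that answers here is a door you've left open, whether you meant to or not, and it's better that you find it before a bored teenager does.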
If you're looking for proof that this approach works, there's one obvious example: open source software. It wasn't so long ago that operating systems and the software that ran on them were becoming increasingly secretive and proprietary. Many of us got into Linux because we didn't want to pay for a simple development environment on Windows, for example, and Red Hat - the open source giant that's just announced $1.13 billion of revenue - has built its business into the vacuum left by closed development.
However, the open source model has changed attitudes, not because it's cheaper (because it probably isn't), but because you get a better product. Not just functionally better, either. It's better because you know, or can discover, its provenance.
Like open security, the scrutiny of many coders checking and using code has helped make open source innovative and secure, from Google's immense network to Facebook's new hardware designs for its open source data centre. Security, after all, is about finding bugs in your system, and the best way of doing that is to put as many eyes on it as possible.
Red Hat has an internal mailing list where it shares almost all strategy ideas and planning, with just a few exceptions made where it can't legally make an announcement. Of course, some would argue this gives its competitors too much information, just as a security guru might advocate closed systems and the restriction of research. But Red Hat isn't suffering from this strategy; it's thriving on it.
It immediately has a pool of 4,000 employees at its disposal, poring over these new ideas and discussing their pros and cons. The result is that better ideas emerge into an environment already primed for their germination, so that, even if Red Hat's competitors did take a few, they wouldn't be able to execute them as well.
This is what Red Hat has always done with its software, with its strategy and with its security, and it's the mark of a system that can be proven to work. Doing the same in the cold and dark, in the hope that no one will notice, is waiting for trouble. And, as with most things, it's much better to face up to the task, in the open, knowing you've nothing to fear.