As the professor of security engineering at Cambridge University, Ross Anderson is one of the founders of security economics as an academic discipline.
He's written what Bruce Schneier says is "the best book on the topic there is" about security engineering, and given evidence to the Home Affairs Committee inquiry into the extension of time that the police can detain Her Majesty's subjects without trial.
Perhaps more impressively, though, he wants your ISP to send you cash every time you get spam.
Linux Format magazine caught up with him for a chat...
Linux Format: With all the time and money spent on security, why does it go wrong so often? Or is it just that the examples we read about in the media are the exceptional cases?
Ross Anderson: We understand the dynamics of bugs and vulnerabilities fairly well now from an economic point of view. We know, first of all, that the vendors don't have a proper incentive to ship good quality software, because the vendors don't pay the cost of failure; we do. That's something that the car industry took two generations to fix. It took litigation from 1917 to about 1965 to establish the principle in America that the car maker is responsible for design errors. Before that, the car makers just said: "So you got run into by a car, and you got hurt – sue the driver, and let him sue the person he bought the car from if he thinks the car was defective." And so on, back through the chain. Breaking that, and establishing vendor liability, took a whole human lifetime after the car was invented. It's going to be a similarly big task in software.
LXF: But surely there will always be bugs in software. How do you determine which are significant?
RA: Yes, since software is big and complex, there are bugs; there are statistically significant numbers of bugs, and you can predict how many you'll get using statistical tools. So you end up having a process, such as regular patching, and you then have some idea of how many vulnerabilities will arrive in any one month, and how many will be fixed in any one month. You can get numbers for how many people get hacked as a result of vulnerabilities that vendors hadn't patched, although they could have. And then of course there are the other cases where somebody has discovered a vulnerability that hasn't been reported to the vendors yet, so you've got a zero-day exploit.
Then you can look at it from the point of view of the economics of the people doing the patching. If you're a company, how much effort do you put into applying patches quickly, and what's the risk? There's a whole series of economic equilibria involved in decisions as to software quality, pre-emptive patching, diligence of application of patches, and so on.
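Anderson's point that monthly vulnerability counts are statistically predictable can be illustrated with a toy model. This is our sketch, not his tooling: we assume discoveries arrive at a roughly constant rate, so monthly counts follow a Poisson distribution, and the rate of six reports per month is an invented figure for illustration only.

```python
import math

# Toy model: if vulnerabilities are discovered at a roughly constant
# average rate, the number of reports in any one month is Poisson
# distributed. The monthly rate here is an assumption, not real data.

def poisson_pmf(k: int, rate: float) -> float:
    """Probability of seeing exactly k vulnerability reports in a month."""
    return (rate ** k) * math.exp(-rate) / math.factorial(k)

def prob_at_least(k: int, rate: float) -> float:
    """Probability of seeing k or more reports in a month."""
    return 1.0 - sum(poisson_pmf(i, rate) for i in range(k))

monthly_rate = 6.0  # assumed average: six reports per month

print(f"P(exactly 6 reports this month)  = {poisson_pmf(6, monthly_rate):.3f}")
print(f"P(10 or more reports this month) = {prob_at_least(10, monthly_rate):.3f}")
```

A patching team could use this kind of estimate to budget effort: with an average of six reports a month, a "bad month" of ten or more is unlikely but far from impossible, which is exactly the equilibrium calculation Anderson describes.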
LXF: That seems to put the blame on the software vendors rather than the hackers…
RA: In the criminal underworld, there's a set of separate economic forces that determine what the exploitation pattern will look like. What, for example, are the economics of running a botnet? Well, we know that when machines are captured, hackers typically carry out the highest-value exploits they can – keyloggers for bank data, and that sort of thing – and then they go down the food chain. Compromised machines may end up being used to send spam, and then once they're blacklisted by all the spam filters, they'll end up being used for distributed denial-of-service attacks.
There are all sorts of places in the chain where there are potential control points, where reasonable amounts of pressure haven't been applied. At present, if I get spam or a phishing email from an infected machine and I report it back to the ISP, then if the ISP is a small to medium sized firm, they'll usually fix it fairly quickly. Within a matter of an hour or so, they'll have that machine isolated into a walled garden, from which the user can get hold of anti-virus software, but not much else. The reason for this is that if you're a small ISP, and a machine starts sending a lot of spam, it screws up your peering arrangements. However, if you are a big ISP, you don't care.