What can be done to promote trust in electronic banking systems?

So what can be done right now to increase trust in banking systems?

Today's most advanced automated security tests throw every known attack at the system under every likely operating condition and – being cloud-based – the tests are kept up to date with new attacks as soon as they are recognised. This is a powerful way to reassure the bank's management that their systems are indeed secure and trustworthy, but it is hard to explain this to customers in a way that builds their trust. They might even wonder why – if the system was properly designed in the first place – it now needs so much additional testing.

The human factor in telephone banking raises the question of whether better trust might be built around a more organic test approach – one that builds up layers of testing that are not so rigidly defined. You could describe these test criteria as being "fuzzy", meaning that the correct responses are not so sharply delineated around the edges. The point is apt, because today's sophisticated test procedures do include a form of "fuzz testing" as a way of addressing unknown security threats.

Fuzz testing bombards the system – wherever applications and devices receive inputs – with semi-random data instead of known attack profiles. This is one way to find out whether any irregular input can crash or hang an application, bring down a website or put a device in a compromised state – the sort of thing that might happen when someone types a letter 'O' where there should have been a zero, or accidentally hits an adjacent key.
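To make the idea concrete, here is a minimal sketch of that mutate-and-observe loop in Python. Everything in it is invented for illustration: `parse_amount` stands in for a real input handler, and the exceptions treated as "expected" are assumptions about which failures that handler is designed to catch.

```python
import random

def mutate(seed: bytes, n_flips: int = 3) -> bytes:
    """Return a copy of `seed` with a few random byte substitutions --
    the kind of near-miss a letter-O-for-zero slip might produce."""
    data = bytearray(seed)
    for _ in range(n_flips):
        pos = random.randrange(len(data))
        data[pos] = random.randrange(256)
    return bytes(data)

def parse_amount(raw: bytes) -> int:
    """Hypothetical input handler standing in for a banking field parser."""
    return int(raw.decode("ascii"))  # raises on anything non-numeric

def fuzz(seed: bytes, rounds: int = 10_000) -> list:
    """Feed semi-random mutations to the parser; collect any input that
    raises something other than the failure modes we expect and handle."""
    crashes = []
    for _ in range(rounds):
        candidate = mutate(seed)
        try:
            parse_amount(candidate)
        except (ValueError, UnicodeDecodeError):
            pass                       # expected, handled failure mode
        except Exception:
            crashes.append(candidate)  # unexpected -- a potential bug
    return crashes
```

A real fuzzer adds smarter mutation strategies and coverage feedback, but the essential shape – mutate, inject, watch for the unexpected – is the same.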

Zero-day attacks

Another goal of fuzz testing is to anticipate "zero-day" attacks – i.e. those that hit you before they hit the news. Hackers assume that you have thoroughly tested your system with traditional functional testing, but there are countless permutations of invalid random input that such testing may never have covered.

As David Newman, President of Benchmarking Consultancy Network Test, explains: "Attackers have long exploited the fact that even subtle variations in protocols can cause compromise or failure of networked devices. Fuzzing technology helps level the playing field, giving implementers a chance to subject their systems to millions of variations in traffic patterns before the bad guys get a chance to".

All it might take is one random string of input to cause a crash or hang, so hackers use automated software to keep throwing random input at your network in the hope of striking lucky. "It takes a thief to catch a thief", as they say – fuzz testing does the same thing, but under controlled conditions. Again, fuzz testing relies heavily on automation to achieve sufficient test coverage. Today's fuzz test tools generate millions of permutations – not only making the network much more secure, but also saving manual work and keeping the testing fast and efficient.

The immediate benefit of fuzz testing is that it increases the bank's trust in its own system security. But does that help the customer to build trust?

I suggest that it does, for the following reasons. One of the things that supports trust in Google is the way it handles silly mistakes: if a user misspells a search term, Google comes up with intelligent suggestions, and that gives the feel of a well-designed system. By analogy, if a customer makes a small slip when logging in to the bank, and the system responds stupidly or even crashes, it suggests that the system is fragile, and that does not build customer confidence.

So the greater resilience to error that results from repeated fuzz testing does make the system seem less fragile – and that is the first step in building confidence.
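The Google comparison suggests what that resilience looks like at the input layer: forgiving the classic slip before rejecting it. This toy sketch assumes an eight-digit account number format and a small table of letter-for-digit confusions, both invented for illustration:

```python
# Common letter-for-digit slips: O for 0, l or I for 1.
CONFUSABLES = str.maketrans({"O": "0", "o": "0", "I": "1", "l": "1"})

def normalise_account_number(raw: str):
    """Forgive obvious near-misses (stray spaces, O-for-zero) before
    validating, rather than failing hard on the first slip.
    Returns the cleaned number, or None if it still cannot be read."""
    cleaned = raw.strip().replace(" ", "").translate(CONFUSABLES)
    if cleaned.isdigit() and len(cleaned) == 8:
        return cleaned
    return None
```

A system that quietly accepts `"1234 567O"` where a brittle one would throw an error is exactly the kind of small, well-designed response that the Google example describes.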

What lies ahead?

Today's functional test systems can do a lot to reassure network managers that their systems are defended as well as possible against attacks and faults, but then the task is to pass on that confidence to the customer without over-explaining and sounding "defensive" in the negative sense.

Fuzz tests go further along the same lines by adding confidence against unknown and unexpected threats, but I suggest that their application could also make the system begin to feel more solid and trustworthy to the customer.

Can we go further? Can we build into a mechanised entry system the equivalent of human intelligence that can assess the personality of the applicant and make good decisions about the credibility of their responses, and what further questions to ask? Instead of just dumbly closing down, can the system flag a danger signal and then escalate authentication with further security checks?

To the customer, such an intelligent response would suggest that the system really is alert to danger and "knows what it is doing" – as scary, and yet as comforting, as a community police officer with good local knowledge and experience.

We still have a long way to go before computers can match those skills, but recent advances in real-time big data analysis could sharpen our understanding of human behaviour patterns, and suggest more subtle tests to identify fraudulent behaviour. Couple that with fuzzing techniques that extend response testing to embrace the infinite variety of possible near-misses, and this could point the way ahead.

Because the real challenge is twofold: to make the system resilient to attack and, at the same time, to build the customers' trust that the system is truly resilient.