It wasn't so long ago that a PIN and a personal password were your guarantee of secure internet banking. Then came digital signatures and personalised images or phrases to confirm that the website is genuine, followed by single-use Transaction Authentication Numbers (TANs) and two-factor authentication, where the TAN is generated by an individual security token or transmitted independently by email or SMS. Then chipTAN generators added transaction data to outwit man-in-the-middle attacks, and now there are calls for a further layer of biometric identification for added security.
Does all this mean that, year-on-year, the public is growing ever more confident of the safety and security of internet banking? Probably not – any more than a house surrounded by a high wall with razor wire, electric fencing, motion detectors, security cameras and armed response warnings makes you feel confident that this must be a safe neighbourhood to live in.
Layer upon layer
Adding many layers of security is the obvious bit – the criminal may have discovered my PIN and retrieved a bank statement from the refuse bin, but still might not know my birth date and mother's maiden name.
When there is a certain amount of human interaction, as in telephone banking, you can even allow a bit of leeway on getting these answers exactly right. Sometimes the call centre asks for more details than I can provide: I have remembered to take my debit card and PIN, reminded myself of all my security answers – and then they ask for the amount of a monthly standing order and I simply cannot remember. But does that mean they will slam the phone down on me? No, they go on asking other questions and see how I manage.
Even though I failed one security test, I get another chance because a human operator has time and the social skills to judge how I react to being told I have failed a test, how I explain or justify my failure, and how I respond to further questioning. A human operator has a human brain that can make very many more subtle decisions based on further layers of information. It can also be wrong.
If, however, the whole transaction takes place via a keypad, there is vastly less corroborating data and greater reliance on mechanical answers. If the PIN or keyword is wrong, it is wrong, and it would be unwise to allow too many further attempts – because we might be under attack from a system using an algorithm to generate a series of likely PINs.
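The standard defence against that kind of automated guessing can be sketched as an attempt counter with an escalating lockout. This is a minimal illustration only – the thresholds, delays and names (`MAX_ATTEMPTS`, `BASE_DELAY`, `PinChecker`) are my own assumptions, not any real bank's policy:

```python
import time

# Hypothetical attempt-limiting sketch; thresholds are illustrative only.
MAX_ATTEMPTS = 3
BASE_DELAY = 2.0  # seconds of lockout once the limit is reached

class PinChecker:
    def __init__(self, correct_pin):
        self.correct_pin = correct_pin
        self.failed = 0
        self.locked_until = 0.0

    def try_pin(self, pin, now=None):
        now = time.monotonic() if now is None else now
        if now < self.locked_until:
            return "locked"
        if pin == self.correct_pin:
            self.failed = 0
            return "ok"
        self.failed += 1
        if self.failed >= MAX_ATTEMPTS:
            # Double the lockout window on each further failure, which
            # slows an automated PIN-guessing algorithm to a crawl.
            self.locked_until = now + BASE_DELAY * 2 ** (self.failed - MAX_ATTEMPTS)
        return "wrong"
```

Note the blunt-instrument behaviour the article is complaining about: after three failures the system locks out everyone alike, whether the caller is a guessing machine or a forgetful customer.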
But what if the keypad entry system were so sophisticated that it could, like the call centre staff, make judgements about such mistakes – whether, for example, the entry process looked like a mechanised attack, like an absent-minded but genuine customer, or like a hacker trying out a series of likely guesses? Google searches, for example, are pretty good at guessing what was really meant when terms are misspelled – they don't just shut down on you.
Similar intelligence might help decide whether a mistaken password was a slip or a fraud attempt and, like a human operator, it might actually identify the attacker, raise an alarm and help catch them instead of simply blocking them to try again later.
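One crude way to approximate that judgement, purely as a sketch: compare the failed entry to the expected one by edit distance and treat a one-character slip differently from a wholesale mismatch. This is only feasible where the server can see the expected value (a PIN checked server-side, say – properly hashed passwords rule it out), and the labels and threshold here are my own assumptions:

```python
def edit_distance(a, b):
    # Classic Levenshtein dynamic programme, row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def classify_failure(entered, expected):
    # Heuristic only: a single-character slip looks like an
    # absent-minded customer; anything further off looks like a guess.
    d = edit_distance(entered, expected)
    if d == 0:
        return "match"
    if d == 1:
        return "probable slip"
    return "possible attack"
```

For example, `classify_failure("4922", "4921")` would flag a probable slip, while `classify_failure("0000", "4921")` would flag a possible attack – a first, very rough layer of the kind of judgement a human operator applies for free.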
We're talking about the future here – artificial intelligence may be sufficiently advanced to provide some interesting screening attempts, but not yet enough to be trusted with anything as sensitive and precious as real-world customers who are paying for the bank's services.
There are, however, recent developments that could bring that future closer.