Why organizations mustn’t ignore the threat of voice fraud


Even with a multitude of digital communication channels at our fingertips, voice remains one of the most important (and natural) ways for people to connect. Most people make several calls every day, whether for work or simply to catch up with friends and family. However, as with all useful technology, telephony is continually targeted by fraudsters looking to exploit it.

About the author

Dr. Nikolay Gaubitch is Director of Research at Pindrop.

Phone crime has an obvious appeal over direct physical crime: the offender stays well out of harm’s way, and the more skilled criminals make themselves very difficult to trace. Still, it’s fair to wonder why fraudsters favor the phone in a digital era when cyber criminals can reach limitless targets online.

So, what can be done to stop fraudsters in their tracks? 

Why has the telephony channel become such an appealing tool for criminals looking to commit fraud?

Despite the array of digital channels available to fraudsters these days, many favor the telephone over other means of communication because of its sense of anonymity and the opportunity it offers to apply effective social engineering tricks on call center agents. What’s more, we frequently see fraudsters using different channels depending on what information they are looking to gather. Quite often they carry out reconnaissance via the telephone to facilitate fraud in other channels, such as online. This approach is what we call omnichannel fraud, in which voice plays a big part and is often an early indicator of fraud on other channels.

What techniques do these fraudsters use to target organizations over the phone? What are the common themes?

Commonly, fraudsters rely on social engineering techniques to trick call center agents or consumers into handing over information. We’ve seen fraudsters gather information through online or telephone scams, or buy items such as stolen credit card details on the dark web. They then use the telephony channel and call centers not only to verify that the stolen information is correct, but also to impersonate someone and trick agents into carrying out fraudulent transactions. We’ve even seen some fraudsters go so far as to carry out what we call intercept attacks, where the fraudster is simultaneously on the phone to both a bank’s call center and the consumer they are trying to impersonate. Through this technique, fraudsters can gather the relevant data in real time, drawing the answers out of the consumer and using them to authenticate themselves to the call center.

Another technique that has become popular with fraudsters, especially throughout the pandemic, is using the IVR for account mining or reconnaissance. Interactive voice response (IVR) systems are a great tool for call centers handling large call volumes, allowing them to efficiently triage callers and connect them with the right person. Unfortunately, fraudsters can use data gathered from the IVR to mount social engineering attacks on agents, authenticate themselves, and carry out further reconnaissance on a target account. Taking that one step further, some fraudsters use their own automated setups to call and interact with IVRs, essentially letting the two machines talk to each other and do the work for them. Unless the IVR has safeguards in place, a fraudster can, for example, simply have their own system repeatedly guess numbers until it hits a correct passcode.
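The automated passcode-guessing risk described above can be blunted with a simple lockout policy on the IVR side. The sketch below is purely illustrative (the class name, thresholds, and in-memory store are assumptions for the example, not any vendor’s implementation): after a few failed passcode attempts on an account, further attempts are refused for a cooldown period, which stops a machine from guessing at machine speed.

```python
import time

# Illustrative thresholds, not recommendations from the article.
MAX_ATTEMPTS = 3
LOCKOUT_SECONDS = 15 * 60  # refuse further attempts for 15 minutes


class PasscodeGuard:
    """Minimal sketch of an IVR passcode lockout policy."""

    def __init__(self):
        # account_id -> (failure_count, time_of_first_failure)
        self._failures = {}

    def is_locked(self, account_id, now=None):
        now = time.time() if now is None else now
        count, since = self._failures.get(account_id, (0, now))
        # Locked while the failure count is at the cap and the
        # cooldown window has not yet elapsed.
        return count >= MAX_ATTEMPTS and (now - since) < LOCKOUT_SECONDS

    def record_attempt(self, account_id, success, now=None):
        now = time.time() if now is None else now
        if success:
            # A correct passcode clears the failure history.
            self._failures.pop(account_id, None)
            return
        count, since = self._failures.get(account_id, (0, now))
        self._failures[account_id] = (count + 1, since)
```

A production system would persist this state, apply similar limits per calling number, and alert on repeated lockouts, but even this minimal policy turns an unbounded guessing attack into a handful of tries per cooldown window.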

How can artificial intelligence and machine learning help organizations to combat voice fraud?

When combating voice fraud, we need to consider two sides of the coin. One side represents the detection of fraudulent activity, while the other represents not compromising on good customer experience. Meeting fraudsters head on without impacting the experiences of genuine callers is not something humans can accomplish alone. This is where artificial intelligence (AI) and machine learning come in.

Rather than relying on humans to monitor for signs of fraud across the many calls they take each day, call centers can deploy an anti-fraud solution built on AI and machine learning. The technology sits in the background and listens to each call, familiarizing itself with the voices of known fraudsters and becoming smarter about fraudster behavior. These systems learn what fraudsters sound like and how they behave, and use that knowledge to identify them. Call centers can also use the technology to authenticate users: the voice of a known customer can be enrolled and later verified, which gives agents additional confidence that they are speaking to the right person. Not only does this provide an extra layer of security by carrying out a job human workers cannot do alone, but it also allows call center agents to make each call more personal, helping them boost the level of customer service.
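Under the hood, voice matching of this kind typically reduces to comparing fixed-length voice embeddings produced by an upstream speaker-recognition model. As an illustration only (the toy vectors, watchlist, and threshold below are assumptions for the example, not Pindrop’s method), screening a caller against a watchlist of known fraudster voiceprints might look like:

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def screen_caller(call_embedding, watchlist, threshold=0.8):
    """Compare a caller's voiceprint to each known fraudster's.

    Returns (fraudster_id, score) if the best match clears the
    threshold, otherwise (None, best_score). The threshold is an
    illustrative value; real systems tune it against false-accept
    and false-reject rates.
    """
    best_id, best_score = None, 0.0
    for fraudster_id, embedding in watchlist.items():
        score = cosine_similarity(call_embedding, embedding)
        if score > best_score:
            best_id, best_score = fraudster_id, score
    if best_score >= threshold:
        return best_id, best_score
    return None, best_score
```

The same mechanic supports the enrollment scenario described above: store an embedding of the genuine customer’s voice at enrollment, then verify later calls by checking similarity against that stored voiceprint rather than against a fraudster watchlist.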

As we move into next year, what does the future of the fraud landscape look like? Will we see any new tactics emerge and will organizations need to change how they defend against fraudsters?

In 2021 we saw a large increase in remote working and reduced face-to-face interaction. This naturally resulted in increased traffic into call centers and, for a while, lengthy call waiting times. One surprising effect was a reduction in fraudulent calls. However, it seems fraudsters are returning to normal working habits just like the rest of us: the last couple of months have shown a steady increase in fraudulent calls. Based on this observation and past experience, in 2022 we can expect fraudsters to target call centers in much the same manner as before the pandemic.

What’s more, this year we’ve already seen deepfakes on the rise, and it would come as no surprise if this continued or even accelerated next year. Beyond deepfakes, organizations should also be aware of voice synthesis (making a machine sound like a particular person) and voice conversion (making one human speaker sound like another). These techniques are not well known to the public because of the limited real-world applications available today. However, they pose a very real threat and are tactics we have already seen fraudsters adopt, for example in the recent $35 million bank heist.

With fraudsters honing their skills in both deepfakes and voice synthesis, I predict these technologies will only grow in popularity as we move into 2022. It is therefore vital that businesses be aware of these techniques and adopt the appropriate technology to combat them.

