5 ways machine learning prevents the growing risk of fraud


The whole world is watching AI. On the heels of President Biden’s Executive Order and the UK AI Safety Summit drawing leaders from all over the globe, machine learning is the topic du jour. From tech companies creating their own AI chatbots to big-box retailers implementing ML to provide shopping recommendations, everyone wants a slice of a market slated to grow to $66.62 billion by 2024. It makes sense: the potential of generative AI, a subset of machine learning, is enormous, and we expect ever more industries to adopt it. But as this democratization continues, so do the risks AI carries. It will become far easier for individuals to attempt fraud through deepfakes, advanced algorithms and other methods. But businesses are not defenseless.

Far from it, in fact. Generative AI and machine learning can prevent fraud just as effectively as nefarious actors can use AI to commit it. These models have the power to learn customer behaviors, detect deepfakes, verify critical documents and much more.

Identity verification

The power of machine learning comes into play from the moment a customer is onboarded. There are many assets to track when interacting with customers, from personal information to biometrics, and as the number of customers rises, keeping up with the influx of data becomes challenging. This is a problem machine learning solves. It can verify every piece of information individually and compare it against a known database, ensuring its validity and preventing fake sign-ups. AI also makes it easier to analyze large numbers of IP addresses and other digital footprints at once, implementing background checks at scale in case a fraudster slips through the cracks. Additionally, companies can develop advanced ID verification methods as part of a Know Your Customer (KYC) strategy, verifying both the identity and the risk profile of a potential customer.
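As a simplified sketch of how such an onboarding check might be wired up: compare submitted fields against a trusted reference record and screen the sign-up IP. The database, field names, IP list and scoring here are all hypothetical, not a production KYC design.

```python
# Illustrative identity-verification sketch. REFERENCE_DB and
# SUSPICIOUS_IPS are made-up placeholders for real reference data.

REFERENCE_DB = {
    "P1234567": {"name": "jane doe", "dob": "1990-04-12", "country": "GB"},
}

SUSPICIOUS_IPS = {"203.0.113.7"}  # e.g. known proxy/abuse addresses (illustrative)

def verify_signup(doc_number: str, submitted: dict, ip: str) -> dict:
    """Compare submitted fields against the reference record and flag mismatches."""
    record = REFERENCE_DB.get(doc_number)
    if record is None:
        return {"verified": False, "reasons": ["unknown document"]}
    reasons = [
        field for field, value in submitted.items()
        if record.get(field, "").lower() != value.strip().lower()
    ]
    if ip in SUSPICIOUS_IPS:
        reasons.append("suspicious IP")
    return {"verified": not reasons, "reasons": reasons}

print(verify_signup("P1234567",
                    {"name": "Jane Doe", "dob": "1990-04-12"},
                    "198.51.100.2"))
```

A real system would replace the hand-written field comparison with trained models and live data sources, but the shape is the same: every attribute is checked individually, and any mismatch feeds into a risk decision.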


Deepfake detection

As mentioned, generative AI has a dark side. Deepfakes have quickly become one of the most accessible and effective ways to spread misinformation, outright lies or fabricated evidence. In fact, a recent report indicated the number of deepfakes across industries in North America rose 1740% in 2023 compared to 2022. Luckily, ML and generative AI can defend against this. Deepfakes are not foolproof; they leave behind telltale signs that things are not what they seem. The deepfake creation process produces visual artifacts not found in authentic media, including inconsistent facial expressions, distortions and other unnatural movements, and some of these artifacts may not even be visible to the human eye. Machine learning algorithms can identify them by looking for the specific characteristics the generation process introduces. Industry-wide collaboration on AI is also key to developing models capable of detecting even the most convincing deepfakes.
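To make the idea concrete, here is a toy scoring sketch built on the kinds of cues described above. Real detectors learn these cues from large datasets; the features, weights and threshold below are entirely hypothetical.

```python
# Toy deepfake-artifact scorer. The cues (blink rate, boundary blending,
# landmark jitter), weights and threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class FrameStats:
    blink_rate: float       # blinks per minute (deepfakes often blink too rarely)
    edge_blend: float       # 0..1 blending-artifact score at the face boundary
    landmark_jitter: float  # frame-to-frame facial landmark instability, pixels

def deepfake_score(stats: FrameStats) -> float:
    """Weighted sum of artifact cues; higher means more likely synthetic."""
    score = 0.0
    if stats.blink_rate < 5.0:         # unnaturally infrequent blinking
        score += 0.4
    score += 0.4 * stats.edge_blend    # visible blending seams
    score += min(0.2, 0.05 * stats.landmark_jitter)  # unnatural motion
    return score

def is_likely_deepfake(stats: FrameStats, threshold: float = 0.5) -> bool:
    return deepfake_score(stats) > threshold

suspect = FrameStats(blink_rate=2.0, edge_blend=0.8, landmark_jitter=6.0)
print(is_likely_deepfake(suspect))
```

In practice these hand-tuned weights are replaced by a trained classifier, but the principle is identical: aggregate many small artifact signals into one detection decision.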

Document verification

Document fraud is rampant. In 2022, the FTC’s Consumer Sentinel Network took in 5.1 million reports, 46% of which were for fraud. But generative AI can be trained to analyze commonly forged documents and look for inconsistencies. These models extract features indicative of forgery, including watermarks, stamps and other distinguishing marks. Comparing passports, drivers’ licenses or any of 14,000+ other identification document types against reference data can reveal instances of fraud, flagging forged documents and rejecting them at submission. Much official documentation relies on signatures, a historically popular target for fraud. Here too, machine learning compares signed documents to reference signatures. Going far beyond the surface level, the algorithm analyzes stroke patterns and pressure – unique features of a genuine signature – quickly identifying and flagging falsifications.
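The signature comparison can be pictured as a distance check between feature vectors. A real system would extract stroke pressure, velocity and shape features from pen or image data; the vectors and threshold below are made up for illustration.

```python
# Sketch of signature verification as feature-vector distance.
# Feature values and the 0.5 threshold are illustrative assumptions.

import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def matches_reference(candidate, reference, threshold=0.5):
    """Accept a signature whose stroke features stay close to the reference."""
    return euclidean(candidate, reference) <= threshold

reference = [0.62, 0.80, 0.33, 0.71]  # stored stroke/pressure features
genuine   = [0.60, 0.78, 0.35, 0.70]  # small natural variation
forged    = [0.20, 0.95, 0.70, 0.10]  # drifts far from the reference

print(matches_reference(genuine, reference), matches_reference(forged, reference))
```

Production systems use learned embeddings rather than four hand-picked numbers, but the core operation – measuring how far a candidate drifts from enrolled references – is the same.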

Transaction monitoring

With e-commerce and other forms of online payment more ubiquitous than ever, transaction fraud is bound to occur. Fraudsters who use stolen card numbers to make purchases trigger expensive chargeback requests, which can reach $100 or more per transaction and quickly drain valuable time and resources from businesses. ML comes to the rescue once again, dramatically improving the odds of catching a fraudulent purchase. Machine learning also helps combat a recent trend: money muling, in which seemingly innocent individuals, known as money mules, are recruited to transfer illegally obtained funds. Algorithms can process and detect anomalies in individual transactions, customer profiles and even historical trends. By training on known fraudulent transactions, the models spot patterns indicative of transaction fraud, money muling and more.
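At its simplest, anomaly detection on transactions can be sketched as a statistical outlier test against a customer's spending history. Production systems use learned models over many features; the z-score threshold and sample amounts here are illustrative.

```python
# Minimal transaction-anomaly sketch: flag amounts far outside a
# customer's historical spending pattern. History values are made up.

from statistics import mean, stdev

def flag_anomalies(history, new_amounts, z_threshold=3.0):
    """Return new amounts whose z-score against the history exceeds the threshold."""
    mu, sigma = mean(history), stdev(history)
    return [a for a in new_amounts if abs(a - mu) / sigma > z_threshold]

history = [21.0, 18.5, 25.0, 22.4, 19.9, 23.1, 20.5, 24.2]  # past purchases ($)
print(flag_anomalies(history, [22.0, 480.0]))
```

Here a $22 purchase blends into the customer's pattern while a sudden $480 charge is flagged for review – the same intuition a trained model applies across amounts, merchants, locations and timing simultaneously.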

Analyze, track and attack

At its core, AI is a relentless task machine: analyzing data, tracking it against existing information and flagging anything that is cause for concern. Companies can use this to their advantage when tackling fraud. Fraudsters and even otherwise legitimate customers engage in promo abuse fraud, in which individuals misuse a company’s promotional offers, such as referral vouchers, to take more than their fair share. AI thwarts this by tracking IP addresses, device fingerprints and user behavior footprints to ensure multiple accounts are not all created from one place. Even when accounts are laced with authentic information that makes them harder to detect, AI’s ability to check for telltale digital footprints is unmatched.
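The multi-account check can be sketched as a simple grouping of sign-ups by device fingerprint, flagging any fingerprint behind several accounts. The account names and fingerprint strings are illustrative placeholders.

```python
# Sketch of promo-abuse detection: group sign-ups by device fingerprint
# and flag fingerprints tied to multiple accounts. Data is made up.

from collections import defaultdict

def flag_shared_fingerprints(signups, limit=1):
    """Return fingerprints associated with more than `limit` distinct accounts."""
    by_device = defaultdict(set)
    for account, fingerprint in signups:
        by_device[fingerprint].add(account)
    return {fp: accounts for fp, accounts in by_device.items()
            if len(accounts) > limit}

signups = [
    ("alice01", "fp-9a1c"),
    ("bob_77",  "fp-3d4e"),
    ("alice02", "fp-9a1c"),  # same device as alice01 – possible promo abuse
    ("alice03", "fp-9a1c"),
]
print(flag_shared_fingerprints(signups))
```

Real systems combine fingerprints with IP ranges and behavioral signals so that one shared device alone does not condemn, say, a family sharing a laptop, but the grouping step looks much like this.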

Businesses are not powerless in the face of AI-enabled fraud. The technology is more advanced than ever, and it can be an incredibly useful tool for tackling the significant increase in fraud companies face. By applying the techniques above, enterprises can stay at the forefront of protecting themselves from nefarious actors. As the saying goes, if you can’t beat ‘em…well, you know the rest.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Vyacheslav Zholudev is CTO and Co-founder of Sumsub.