Generative AI: Risks and solutions for a safer digital landscape

The impact of generative AI is being felt across industries and everyday life. OpenAI's ChatGPT, which most of us have heard about in recent months, gained an unprecedented 1 million users within a week of launching. Google has added to the growing buzz by launching 10 new generative AI courses to upskill a new generation of workers. Our latest research backs this up, revealing that over two-thirds (67%) of consumers worldwide are now familiar with this cutting-edge technology. But despite many early achievements, generative AI brings both promising potential and significant risks, especially in the realm of digital identity. Advances in AI technology have made it easier than ever to create and spread persuasive fake news and disinformation, blurring the line between truth and falsehood.

Spreading disinformation with AI

In the past, large-scale disinformation campaigns required extensive resources and coordinated effort from many individuals. Thanks to generative AI, creating and spreading compelling fake news stories, social media posts and other disinformation has become more accessible and cost-effective than ever. These systems have matured to the point where it is almost impossible to tell whether a piece of content was created by a human or a machine.

By exploiting the network effects of social media platforms, disinformation can spread quickly and reach millions of people. The consequences can be severe: when an AI-generated deepfake image depicting an explosion near the Pentagon went viral on Twitter, the S&P 500 stock index plummeted by 30 points within minutes, wiping a staggering $500 billion off market capitalization. The markets recovered once the image was confirmed as fake, but the incident demonstrates the harm deepfakes can inflict and highlights the need for vigilance.

The spread of disinformation is further amplified when social media platforms lack robust identity verification checks at the account creation stage. In such cases, it is remarkably simple for one individual to create a multitude of fake accounts and disperse disinformation on an unprecedented scale. The absence of rigorous verification procedures not only facilitates the circulation of false narratives but also erodes trust in online information, which can reduce engagement with digital platforms overall.

Unleashing new levels of fraud with generative AI

As well as spreading disinformation, generative AI creates new avenues for fraud and social engineering scams. Previously, scammers relied on scripted responses and basic chatbots to interact with their targets. Responses often lacked relevance, were poor imitations of human interaction and failed to deceive potential victims. With the advancements in generative AI, however, scammers can now emulate human interaction with exceptional precision and authenticity.

By harnessing the power of large language models, AI-powered chatbots can analyse incoming messages, comprehend the conversational context and generate responses that closely resemble genuine human interaction, free from the markers that once gave away older chat scams. This allows scammers to extract information from their targets with heightened effectiveness and persuasion.
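To see why these bots feel so human, consider a minimal sketch of a context-aware chat loop, written here against OpenAI's chat completions API. The model name and system prompt are illustrative assumptions, not taken from any real campaign; the key detail is that the entire message history is resent with every call, so each reply builds on everything said so far.

```python
# Minimal sketch of a context-aware chatbot loop built on a large
# language model. Uses OpenAI's chat completions API; the model name
# and system prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The running message history is what lets the model "comprehend the
# conversational context": every prior turn is resent on each call.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def reply(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model name
        messages=history,      # full conversation, not just the last turn
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(reply("Hi, I had a question about my account."))
```

Because the model sees the full history, it can reference earlier details naturally, which is precisely the trait that made older, stateless scam bots easy to spot.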

What's more, scammers can exploit trust by cloning the voices of people their targets know personally or recognise from public life. And while our research shows that 43% of UK consumers claim to be aware of AI's potential to create convincing audio deepfakes that deceive victims into handing over sensitive information or money, UK Finance reports that impersonation scams like these still cost the UK £177 million in 2022.

Leveraging digital identity to mitigate risks

One effective strategy for combating these challenges is leveraging AI for identity verification at account opening and for continued authentication. Online service providers and social media platforms can implement multimodal biometrics, combining multiple biometric signals, such as voice and iris recognition, with machine learning algorithms. This enhances the accuracy and security of identity verification and, through liveness detection, enables the detection of face-morphs and deepfakes, providing an added layer of protection against emerging threats.
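As a rough illustration of how multiple signals might be combined, the sketch below fuses face, voice and liveness scores into a single verification decision. The scoring functions, weights and thresholds are all assumptions made for the example; they stand in for real biometric engines and do not represent any vendor's actual pipeline.

```python
# Hedged sketch of score-level fusion for multimodal biometrics.
# The three scoring functions are hypothetical stand-ins for real
# face, voice and liveness engines; each returns a 0.0-1.0 score.

def face_match_score(selfie, enrolled_profile) -> float:
    return 0.92   # placeholder value for the sketch

def voice_match_score(voice_sample, enrolled_profile) -> float:
    return 0.88   # placeholder value for the sketch

def liveness_score(selfie, voice_sample) -> float:
    # A real engine would check for replayed video, face-morphs
    # and synthesized audio; this placeholder just returns a score.
    return 0.95

FACE_WEIGHT, VOICE_WEIGHT = 0.5, 0.5   # assumed fusion weights
MATCH_THRESHOLD = 0.80                 # assumed operating point
LIVENESS_THRESHOLD = 0.90              # assumed anti-deepfake gate

def verify_identity(selfie, voice_sample, enrolled_profile) -> bool:
    # Liveness gate first: a strong match score is meaningless if the
    # input itself is a replayed video or a cloned voice.
    if liveness_score(selfie, voice_sample) < LIVENESS_THRESHOLD:
        return False
    # Weighted score-level fusion: weak evidence in one modality can
    # be offset by strong evidence in another.
    fused = (FACE_WEIGHT * face_match_score(selfie, enrolled_profile)
             + VOICE_WEIGHT * voice_match_score(voice_sample, enrolled_profile))
    return fused >= MATCH_THRESHOLD
```

Score-level fusion is only one design choice; systems can also fuse at the feature or decision level, trading accuracy against engineering complexity.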

In the event of a compromise, providers equipped with multimodal biometric systems can detect and prevent unauthorized access to existing accounts or the creation of fraudulent ones, effectively stopping stolen data from being put to use. In this sense, AI-powered solutions can help mitigate the risks associated with generative AI and reinforce trust and confidence in the digital realm.

Encouragingly, our findings reveal that UK consumers welcome biometric identity verification. A notable 71% of respondents said the technology is needed when accessing financial services accounts online, 63% said the same for online healthcare services, and 54% for social media accounts. By embracing AI-powered solutions and focusing on robust digital identity practices, organizations can help create a safer digital environment for all.

The rise of generative AI presents both exciting possibilities and profound risks. To counter these threats, organizations must be proactive in adopting AI-powered solutions and robust digital identity practices.

Philipp Pointner is Chief of Digital Identity at Jumio, the leading provider of AI-powered identity verification.