Understanding the human factor of digital safety


Technical security vulnerabilities represent only part of the risks in the online world. Unfortunately, humans are often the weakest link in the security chain, and social engineering is the practice of taking advantage of people’s natural vulnerabilities to carry out malicious acts.

Cybercriminals rely on interpersonal manipulation, deliberately exploiting character traits and emotions such as helpfulness, trust, fear, or respect for authority to feign a personal relationship with the victim. In this way, they entice their victims to disclose sensitive information, make bank transfers, or install malware on their private PC or their employer's corporate network.

Why is it important to strengthen human-centric digital safety?

Traditionally, the focus of cyber security has primarily been on closing technical security gaps. We are making progress every day in optimising our technologies to defend our customers against cyberattacks. However, those threats that directly target people – such as social engineering threats – are still massively neglected. Other examples include cyberbullying, life-threatening Internet challenges, the spread of fake news, and undesirable side-effects of personalisation algorithms. 

The consequences are severe: in the UK, 4 in 10 children aged 8-17 (39%) are reported to have experienced bullying, on or offline, and that bullying is more likely to happen on a device (84%) than face-to-face (61%). Cyberattacks caused a staggering £2.3 billion of damage to the British economy in 2022, and research by AAG found that 82% of breaches against businesses were the result of human error or social engineering. On top of this comes the influencing not only of people but also of institutions and nations, such as the swaying of elections and politics through fake news and deepfakes.

This is where Human-Centric Digital Safety (HCDS) becomes an increasingly important lever for making the internet a safer place. Through this approach, we focus on the “human factor” of digital safety by creating technical tools to aid and protect people in difficult situations.


Why is it that cybersecurity companies like Avast can successfully combat HCDS threats?

There are several good reasons why cybersecurity companies should be committed to HCDS. For Avast in particular, the main reason is probably our independence. We are completely committed to keeping our users as secure online as possible – that is our expertise – so we don't have to worry about engagement rates or advertisers, as the big e-commerce or social media platforms do. In addition, we can rely on anonymised data from more than 500 million customers worldwide.

This means that our Threat Labs can identify specific patterns of fraud and misinformation across several networks at an early stage and quickly protect our users. Our customers also benefit from the expertise and experience we have gained over the years in combating cyberattacks, so we can regularly provide them with sensible, easy-to-implement rules of conduct to protect themselves against social engineering. And who can better combat online threats of all kinds, and inform people about them, than those who deal with them every day?

How can artificial intelligence (AI) protect against social engineering?

Personalisation algorithms and deepfakes have shown that some dangers on the internet are themselves based on artificial intelligence. But AI can also successfully help fight cybercrime. For example, tremendous advances in natural language understanding make it possible to automatically explain why a certain message is likely a scam, detect toxic or coercive conversations, or check a claim against trustworthy evidence.
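
To make this concrete, here is a minimal, illustrative sketch – not Avast's or Gen's actual detection pipeline – of how an off-the-shelf language model can score a message against a few self-defined threat labels. The model name (facebook/bart-large-mnli), the label set, and the 0.7 threshold are all assumptions chosen for the example.

```python
# Illustrative sketch only: flag a likely scam message with a zero-shot text classifier.
# Model, labels, and threshold are assumptions, not any vendor's production setup.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

message = (
    "Your account has been locked. Verify your bank details within 24 hours "
    "at http://example-support.top or it will be permanently closed."
)

# Score the message against labels we define ourselves.
labels = ["phishing or scam", "harassment or bullying", "ordinary conversation"]
result = classifier(message, candidate_labels=labels)

# Labels come back sorted by score; the top one can drive a human-readable warning.
top_label, top_score = result["labels"][0], result["scores"][0]
if top_label != "ordinary conversation" and top_score > 0.7:
    print(f"Warning: this message looks like {top_label} (confidence {top_score:.0%}).")
```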

In the future, this could increasingly be used to combat fake news or hate speech on social media platforms, for example. Algorithms are also constantly improving at inferring complex emotional states from text, and AI systems now perform such tasks even better than processes that still rely purely on human decisions.

With this in mind, AI offers various solutions to protect against social engineering, helping us manage the sheer mass of digital threats whose distribution is amplified by automatic text and image generators. What's more, AI can keep up with the pace of the Internet. YouTube videos go viral within hours, generating clicks in the millions; AI systems can scan comment sections quickly, efficiently, and reliably for problematic content.
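
As an illustration of that scale point, the sketch below batches a stream of comments through a publicly available toxicity classifier and flags only those above a threshold. The model name (unitary/toxic-bert), the batch size, and the 0.8 threshold are assumptions made for the example, not a description of any production system.

```python
# Illustrative sketch only: scan a stream of comments for toxic content in batches.
from transformers import pipeline

# top_k=None returns a score for every label, not just the best one.
toxicity = pipeline("text-classification", model="unitary/toxic-bert", top_k=None)

comments = [
    "Great explanation, thanks for sharing!",
    "Nobody would miss you if you disappeared.",
    "Can you do a follow-up on password managers?",
]

THRESHOLD = 0.8  # assumed cut-off for queuing a comment for review
for comment, scores in zip(comments, toxicity(comments, batch_size=32)):
    toxic_score = next((s["score"] for s in scores if s["label"] == "toxic"), 0.0)
    if toxic_score > THRESHOLD:
        print(f"Flag for review ({toxic_score:.0%} toxic): {comment!r}")
```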

Progress is also being made in communication with users. So far, we have only been able to warn them about technical threats like malware, not about pure HCDS threats; however, chatbots are getting better and better at carefully explaining these threats to users.
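
A real assistant would generate such explanations with a dialogue model; the hypothetical helper below only sketches the shape of a plain-language warning built from a detection result. The function name, threat categories, and wording are illustrative assumptions.

```python
# Illustrative sketch only: turn a detection result into a plain-language warning.
def explain_threat(kind: str, evidence: list[str]) -> str:
    intros = {
        "scam": "This message shows several signs of a scam.",
        "cyberbullying": "This conversation contains language typical of bullying.",
        "fake_news": "This claim does not match trustworthy sources.",
    }
    bullets = "\n".join(f"  - {item}" for item in evidence)
    return (
        f"{intros.get(kind, 'This content may be unsafe.')}\n"
        f"Here is what stood out:\n{bullets}\n"
        "If you are unsure, do not click any links or reply."
    )

print(explain_threat("scam", ["urgent 24-hour deadline",
                              "link to an unknown domain",
                              "request for bank details"]))
```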

We've listed the best antivirus software.

Viliam Lisy, Principal Scientist, AI at Gen Digital.