One of the most persistent misconceptions I have encountered recently in the media is the concept of the “trusted website”, especially at this time of year, when articles advise shoppers to stay safe online by “only shopping or interacting with trusted websites” and looking for the well-known “secure website” logos.
Pedro Fortuna, CTO and Co-Founder of Jscrambler.
Well, the problem here is that there’s no such thing as a trusted site anymore. Even with TLS, web application firewalls, native browser defenses and everything else that sits between the user and a website, plenty still goes on that these security measures cannot effectively prevent, especially on the client side (i.e., everything that takes place in the browser or on the end-user device).
There’s a reason why cybersecurity experts advocate for the “zero trust” model. A seemingly secure website can actually be plagued with malicious activity without any visibility from the website owner or the end-users. And this becomes much more difficult given the complexity of the supply chain behind modern websites.

The web supply chain problem
Exploiting security flaws
Virtually every single website out there has significant exposure to third-party risk. Statistics show that the average website today includes 35 different third-party components, and each of these has fourth-party dependencies of its own. And when we zoom into the first-party code that’s developed internally by the company, we find that, on average, 97% of that source code comes from external libraries used during development.
On the web, all website components run with the same privileges, so any third-party component is free to interfere with the website in any way. Let’s look at an example and consider a global retailer with extensive cybersecurity resources. This company runs an e-commerce shop and employs all the traditional security mechanisms at the server and network level. From an attacker’s standpoint, it might not make sense to try to breach this company’s servers to retrieve valuable data. What they can do instead is target one of the suppliers in the company’s website supply chain.
For instance, if this retailer is running a chatbot service on the website, attackers might probe this vendor for security flaws and eventually find a way in, since the third-party provider might have fewer resources allocated to security. When attackers succeed, they can inject a malicious payload into the service sourced from the vendor, effectively poisoning the well and serving the malicious code to every company using that service.
If this example sounds familiar, it’s likely ringing a bell from the Magecart web skimming attack on Ticketmaster (achieved through a chatbot), or maybe from the SolarWinds incident (a software supply chain attack that followed a similar approach). Or maybe it’s from any of the hundreds of other supply chain attacks that have been making headlines these past few years, as attackers come to understand the opportunity behind this serious cybersecurity flaw.
Today, the vast majority of companies are seriously unprepared to deal with these attacks. They have absolutely no clue about what is happening on the client-side of their websites and can’t be certain about what code is actually being served to their end-users. This is why web supply chain attacks like Magecart often remain undetected for months.
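Regaining that visibility doesn’t have to start with heavyweight tooling. Browsers can already report unexpected scripts through a Content-Security-Policy header in report-only mode: the page keeps working as before, but any script loaded from an origin the site owner didn’t allow-list triggers a violation report. A minimal sketch of such a header (the vendor origin and report endpoint here are hypothetical):

```http
Content-Security-Policy-Report-Only: script-src 'self' https://vendor.example; report-uri /csp-reports
```

Violation reports won’t stop an attack by themselves, but they tell the site owner, often for the first time, what code their users’ browsers are actually being asked to run.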
At least we’re seeing some signs of progress in the right direction. In May 2021, the White House issued an executive order to improve the U.S. cybersecurity posture following the unprecedented impact of the SolarWinds supply chain attack. This push from federal entities is likely to prompt businesses to improve their security posture and avoid relying on traditional security approaches that fail to prevent these new attack vectors.
Companies wanting to reduce their exposure to third-party risk and gain control over their supply chain can’t afford to do too little too late. They must take urgent action and implement additional security layers to gain effective visibility and control over their websites. And this requires going beyond the firewall and considering everything that can happen on the client side.
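One client-side control that browsers already enforce is Subresource Integrity (SRI), which pins a third-party script to a cryptographic hash so that a tampered copy of the vendor’s file simply refuses to load. As a minimal sketch, here is how the integrity value is computed (the script bytes and vendor URL below are made up for illustration):

```python
import base64
import hashlib

# Hypothetical body of a vendor script; in practice, hash the exact
# bytes you reviewed and intend to let the browser execute.
script_bytes = b'console.log("chat widget loaded");'

# SRI uses the base64-encoded SHA-384 (or SHA-256/512) digest of the file.
digest = hashlib.sha384(script_bytes).digest()
integrity = "sha384-" + base64.b64encode(digest).decode("ascii")

print(integrity)
# The value goes on the script tag, e.g.:
# <script src="https://vendor.example/chat.js"
#         integrity="sha384-..." crossorigin="anonymous"></script>
```

If the vendor legitimately updates the script, the hash must be updated too, which is exactly the point: no third-party file changes silently.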
If we see this happening on a global scale, perhaps someday the concept of a “trusted website” might become the rule rather than the exception.