How to finally secure the software supply chain


Cybersecurity has always been about prevention: making sure what shouldn't happen, doesn't happen. It means putting measures in place to ensure a worst-case scenario can't escalate, even if all the factors are in place for it to happen. Doing this with software is much more challenging, because while you can generally show that software works as required, there is no practical way to prove it isn't doing something extra and unintended in the background.

About the author

Dr Simon Wiseman is CTO of Forcepoint Global Governments and Critical Infrastructure.

When a software consumer signs up for a license and starts using a product, they're happy as long as it functions the way they need. But just because it works for them doesn't mean it's secure. It just means they haven't yet used it in a way that makes it go wrong and potentially cause damage. A feeling of security often comes from choosing popular, widely used software: the more users there are, the more likely it is that someone has already done something that exposed a security flaw, allowing it to be fixed.

Unknown authors, unknown risks

This may, however, be a false sense of security. Why? Because today's software is a layered, multi-authored product. At the dawn of computing, a piece of software had a single author. But with the arrival of the first assembly languages in the 1940s, software was used to write new software, and the end result became a joint product of both authors.

Fast forward to today and those basic tools have expanded into complete ecosystems of software produced by myriad authors. The functionality demanded by users and modern computing environments requires complex software, which can only be created using tools and components that others have produced.

Commercial pressure also demands that the whole thing be delivered quickly, so even simple functions get the same treatment. The net effect is that nobody really knows what any of the software does, and when a flaw is discovered it can be extremely difficult to pin down who its author was. Software vendors can therefore never be sure their product doesn't contain undesirable backdoor functionality or other flaws that cybercriminals can exploit.

Businesses are now made up of all kinds of individual components and technologies, from workstations and mobile devices to data stores, collaboration services, sensors and actuators, all connected by the public Internet. Every business is supported by a supply chain of other businesses, as well as being a link in the chain to a final customer. Any component in this long chain of software can be reached by anyone over the Internet, and vice versa. Strong authentication and access controls can keep attackers out, but they don't work if the attacker has a backdoor in the software of one of these components, because they can then get on the inside and work to remain undiscovered.

Take some of the most prominent cyberattacks of the past few years and chances are they involved the supply chain. From Stuxnet to NotPetya and Sunburst, software code reuse was at the heart of the problem. With just one component from one supplier compromised, countless organizations are left vulnerable to attack.

Verifying integrity

Proving that the complex software a vendor provides is free from backdoors, flaws and vulnerabilities is never going to be a perfect science. Initiatives like Google's SLSA (Supply-chain Levels for Software Artifacts) offer one way forward for vendors, allowing them to vouch for the integrity of software artifacts throughout the software supply chain in a measurable way. Vendors should also introduce authentication and access controls into their development process, managing writing, building and testing through a repository so that everything is traceable. When source materials change, the repository makes it possible to identify the user responsible.
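Even a basic integrity check illustrates the principle. The minimal sketch below, written in Python, verifies that a downloaded build artifact matches the SHA-256 digest its producer published alongside it. The file name and the digest passed to the script are purely illustrative, and this is not part of SLSA itself, just the kind of traceability check such frameworks build on.

import hashlib
import sys
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    # Compute the SHA-256 digest of a file, reading it in chunks so that
    # large artifacts do not have to fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Illustrative usage: python verify_artifact.py build.tar.gz <published-sha256>
    artifact, expected = Path(sys.argv[1]), sys.argv[2].lower()
    actual = sha256_of(artifact)
    if actual != expected:
        print(f"MISMATCH: expected {expected}, got {actual}")
        sys.exit(1)
    print("Artifact digest matches the published value.")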

In practice this isn't enough, because software authors inevitably won't be writing everything from scratch themselves. Externally developed software, whether used as a tool during the production of the new software or incorporated into the final product, may still contain backdoors or flaws that make the product unsafe. Fully resolving this challenge will require tools such as package management systems to impose some control over the process of importing and integrating third-party code, and to make that process auditable.
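As a simple illustration of what "auditable" can mean at the package level, the Python sketch below checks a pip-style requirements file and flags any third-party dependency that isn't pinned to an exact version or doesn't carry an integrity hash. The file name and the policy itself are assumptions made for illustration, not a prescribed standard.

import re
import sys
from pathlib import Path

# A dependency entry is acceptable only if it pins an exact version
# (name==version) and carries at least one --hash annotation.
PINNED = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._\-\[\],]*==\S+")

def audit(lockfile: Path) -> list[str]:
    # Join backslash continuations so a requirement and its --hash lines
    # are treated as one logical entry, then check each entry.
    violations = []
    entries = lockfile.read_text().replace("\\\n", " ").splitlines()
    for entry in entries:
        entry = entry.strip()
        if not entry or entry.startswith(("#", "-")):
            continue  # skip blanks, comments and pip options in this sketch
        if not PINNED.match(entry):
            violations.append(f"not pinned to an exact version: {entry}")
        if "--hash=sha256:" not in entry:
            violations.append(f"no integrity hash: {entry.split()[0]}")
    return violations

if __name__ == "__main__":
    problems = audit(Path(sys.argv[1] if len(sys.argv) > 1 else "requirements.txt"))
    for problem in problems:
        print(problem)
    sys.exit(1 if problems else 0)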

Living with code reuse

As well as implementing more rigorous environments for software development, organizations should also look for ways to close other potential routes into their networks and data that cybercriminals could use. Boosting network security by moving to zero trust and requiring multi-factor authentication adds extra barriers that can help screen out malicious actors. Technologies like Remote Browser Isolation (RBI) and Zero Trust Content Disarm and Reconstruction (CDR) can be particularly useful in a hybrid working environment, as they protect desktop devices from web threats and ensure downloads are malware-free.

The complexity and speed of modern software development means code reuse is the norm. Third-party code libraries are simply too useful to abandon. But at the same time, businesses cannot ignore the threat posed by flaws in this system, hoping others will be hit first and the attacks neutralized before they themselves are affected. Finding ways to live securely with this reality means closely examining the development and distribution processes used to create and ship software, and proactively building in ways to provide assurance and integrity.

The crucial mark of success for any tool that verifies the integrity of software code will be consumers using the information these frameworks produce to make buying decisions. That market pressure and competition will then force the pace of change, insisting that software is designed and built in increasingly secure ways.


Dr Simon Wiseman

Dr Simon Wiseman is the Chief Technology Officer of Forcepoint Global Governments and Critical Infrastructure. Simon brings over thirty years of experience in Government computer security.