Edge computing: a balance of risks and benefits


Edge computing may be a relatively new technology term, but it has not taken long for businesses to understand the benefits of locating compute and storage resources near the user or the source of the data. By processing information where applications are running, rather than sending it back to a data center miles away, software responds faster and can do more.

About the author

Lucy Kerner is Director of Security Global Strategy and Evangelism at Red Hat.

The difference is especially noticeable for businesses where every millisecond matters, such as high-frequency financial trading, automated vehicles and equipment safety monitoring. But it’s also highly appealing in contexts where latency isn’t mission-critical, such as content streaming, manufacturing and smart utilities, since today’s users expect near-instant response times and an abundance of capabilities from their applications.

In a recent survey of 1,470 IT professionals, Red Hat found that 72% of respondents ranked the combination of Internet of Things and edge computing as a priority emerging tech workload over the next 12 months. The range of applications for edge devices is vast, from IoT sensors and internet routers to wearable tech and factory-floor robots. By 2022, there will be an estimated 55 billion edge devices on the market, and by 2025 this is expected to grow to at least 150 billion.

However, the clear benefits of edge computing need to be balanced against perceived risks. Historically, cybersecurity for businesses has been about centralizing operations, meaning that for some, the distributed nature of edge seems a riskier choice. More devices increase the attack surface, and because edge devices must exchange data with their data centers, each of those connections is a potential point of exploitation.

Holistic edge strategy

The solution is to approach security as part of a holistic edge strategy and not in opposition to it. Teams that bake security into their architecture from the start can treat the edge as an extension of their environment, as secure and resilient as the core. In this way, security is an enabler, not an afterthought.

A key consideration is having consistency between the systems you run at the edge and the network that connects them to your core systems. Standard security protocols and processes allow for repeatability, making things easier to manage and secure. However, the best edge devices tend to be designed for a very specific task and involve multiple software and hardware vendors. This seems to fly in the face of standardization.

This is where the hybrid cloud comes in. Hybrid cloud is a strategy that connects private infrastructure and public cloud (or multiple public clouds from different providers). This gives organizations the flexibility to use whichever environment is best for a given workload at a given time, and it can serve as a common foundation on which to build an edge stack as an extension of the core infrastructure. With a unifying platform underpinning the hybrid cloud, you can scale by making use of containers and Kubernetes while keeping security settings consistent right out to the edge, with integrated security spanning OS and platform security, data security, identity and access management, application security, and more.
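To make the idea of "consistent security settings right out to the edge" concrete, here is a minimal sketch in Python. All names (the baseline keys, the cluster names, the settings) are invented for illustration; the point is that every site, core or edge, is checked against the same central baseline rather than per-site rules.

```python
# One central security baseline, applied identically to every cluster.
BASELINE = {
    "runAsNonRoot": True,
    "readOnlyRootFilesystem": True,
    "allowPrivilegeEscalation": False,
}

def violations(security_context):
    """Return the baseline keys that this cluster's settings violate."""
    return [
        key for key, required in BASELINE.items()
        if security_context.get(key) != required
    ]

# Hypothetical inventory: one core data center and one edge site.
clusters = {
    "core-dc": {"runAsNonRoot": True, "readOnlyRootFilesystem": True,
                "allowPrivilegeEscalation": False},
    "edge-factory-7": {"runAsNonRoot": True, "readOnlyRootFilesystem": False,
                       "allowPrivilegeEscalation": False},
}

# Drift report: which sites deviate from the common baseline, and how.
report = {name: violations(ctx) for name, ctx in clusters.items()}
```

Running the same check everywhere is what turns the edge into "an extension of the core": a deviation at a factory site surfaces in the same report, in the same terms, as one in the data center.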


Turning to network security, businesses are increasingly using third-party SD-WAN (software-defined wide-area network) technology to manage their network expansion. It’s a good idea to look at using MSSP (managed security service provider) experts for network security as they are offering increasingly sophisticated SASE (secure access service edge) solutions. SASE addresses edge security at scale by integrating SD-WAN and security into a cloud service in partnership with security vendors. It gives security teams visibility across their entire network through a single pane of glass. Security policies, threat prevention, and security remediations can be defined, monitored, and executed centrally and holistically.
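The "define centrally, execute everywhere" model that SASE enables can be sketched in a few lines of Python. This is purely illustrative (the rule names, ports, and default-deny choice are assumptions, not any vendor's API): one policy table is evaluated identically at every edge site, so enforcement decisions do not drift per location.

```python
# Centrally defined network security policies, shared by all edge sites.
POLICIES = [
    {"name": "block-telnet", "port": 23, "action": "deny"},
    {"name": "allow-https", "port": 443, "action": "allow"},
]

def evaluate(port, policies=POLICIES):
    """Return the action of the first matching rule; default-deny otherwise."""
    for rule in policies:
        if rule["port"] == port:
            return rule["action"]
    return "deny"
```

Because every site evaluates the same table, updating a policy centrally changes behavior everywhere at once, which is what gives security teams the single-pane-of-glass view described above.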

A consistent automation strategy across the organization is important for integrating security into processes, applications, and infrastructure from the start in a repeatable way. As with any IT automation journey, approach security automation in phases with defined goals for each phase. Many enterprises start by automating tasks that are performed repetitively: configuration management, patching, security vulnerability identification and remediation, policy enforcement, and more. Automating the security and compliance for application development, infrastructure operations, and security operations is a foundational step towards comprehensive DevSecOps, enabling you to implement continuous security and compliance with automation.
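One of the phase-one tasks named above, security vulnerability identification, can be sketched as follows. The package names, versions, and the advisory table are all invented for illustration; a real pipeline would pull its inventory and advisories from actual scanning and patch-management tooling.

```python
# Hypothetical advisory data: package -> versions known to be vulnerable.
KNOWN_VULNERABLE = {
    "openssl": {"1.0.2"},
    "log4j": {"2.14.1"},
}

def flag_vulnerable(installed):
    """Return (package, version) pairs that match a known-vulnerable version."""
    return sorted(
        (pkg, ver) for pkg, ver in installed.items()
        if ver in KNOWN_VULNERABLE.get(pkg, set())
    )

# Hypothetical inventory collected from one host.
inventory = {"openssl": "1.0.2", "log4j": "2.17.0", "nginx": "1.24.0"}
findings = flag_vulnerable(inventory)
```

Automating this kind of repetitive check across every host, edge or core, is what makes the later phases (automated remediation, continuous compliance) tractable.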

More broadly, it is beneficial to bring security out into the open. Security teams have traditionally been siloed. However, now that developers have more and more control of the application development lifecycle, security can no longer fall solely to the security teams. The responsibility should be spread across the company – you have to look at security as a team sport.

With consistency across the network, the edge becomes part of the core security strategy. By treating the secure hybrid cloud and the secure network as parallel priorities, and choosing partners for each that can work effectively together, organizations can seize opportunities at the edge wherever they arise.

