One of the Linux Foundation’s newest initiatives is LF Edge. Launched at the start of the year, LF Edge is an umbrella initiative that hosts over half a dozen projects that are working together to develop an interoperable framework for edge computing. You can gauge the importance of the project by running through its list of members, which includes major chipmakers such as Intel and Arm, telecommunications companies like Ericsson and AT&T, and device vendors like HP, IBM, Samsung, Seagate, and more.
Erik Nordmark is the Chief Architect of Zededa, Inc. and also serves as a member of LF Edge’s technical advisory committee. Zededa joined the LF Edge initiative by open sourcing and contributing Project EVE (Edge Virtualisation Engine) to the group.
There are already millions, if not billions, of computing devices on the edge generating an immense volume of data. In the video, Erik argues that the edge is already here; it’s just not connected yet. The data is being generated at the edge, and there’s tons of it, says Erik: “They estimate that this will likely overwhelm what is being generated to produce cat videos and other things that happen today by a factor of hundreds, thousands, orders of magnitude. So you can't realistically send all of that to the cloud.”
The other thing, Erik explains, is that in some cases you want the devices themselves to be able to react to the data and act on it locally. It could be because you want the device to continue to function even if it loses its connection to the Internet, or because you want the device to react within an extremely short time, which wouldn’t be possible if it had to send the data elsewhere over the Internet first.
So it all boils down to being able to analyse the data generated at the edge in a timely and efficient manner. Erik tells us that this is exactly the puzzle that Project EVE helps solve.
The real challenges
Going forward, the two big challenges for edge computing that Erik highlights during our conversation are security and interoperability.
One of the objectives for the LF Edge initiative is to figure out how to establish secure connections to the devices on the edge: “So traditionally, the way we built compute right from sort of servers to laptops, even to phones, is based on human beings sitting on a device and logging in directly to the device. And the way this stuff manifests itself in many cases is that people have usernames and passwords.”
This doesn’t work on the edge because the devices are distributed and one doesn’t interact with them directly: “You interact with the data, you interact with the applications you deployed, but there's no reason for you to log into that device. So that's something that we built into EVE from day one.” Giving a broad overview of how EVE authenticates and identifies devices, Erik says that each device has a device certificate rooted in the hardware using a Trusted Platform Module (TPM), which helps establish the device’s identity. Users then interact with the controller, which in turn, for instance, instructs a particular device to deploy a particular application: “There are many aspects to the security puzzle here that matter and not only because it's physically distributed, but also because the stuff is sitting at the cyber physical edge.”
This also ties into the other big challenge for edge computing: interoperability. Erik says that quite a lot of work is required to make the kind of deployment use case he describes above easier from an interoperability perspective, because the hardware in edge devices is very diverse. It’s no surprise, then, that developing applications that work across all the devices on the edge, perhaps even without a connection to the Internet, is one of the top priorities for the LF Edge initiative.