Bigger than Linux: The rise of cloud native

Outside of the three current pillars, there are the emerging security vendors, says Philips. “And Kubernetes is starting to build in stuff to make it possible for the compliance officers inside of these companies to do their part of the job; make sure that application developer mistakes don’t turn into organisational mistakes.”

An example of the mistakes that can occur was vividly demonstrated by Liz Rice, software engineer and technology evangelist for Aqua Security, in her keynote. Her main point was not that containers are wide open, but rather that the default settings can create unforeseen opportunities. For instance, most containers run as root. According to MicroBadger, the project that enables you to inspect images hosted on Docker Hub, 86% of Dockerfiles don’t have a USER line, so their containers run as root by default. This can be fixed by changing the Docker image itself so that it runs as a non-root user, which Rice demonstrated with an NGINX image by binding to an unprivileged port and changing file permissions and ownership.
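A minimal sketch of the kind of change Rice described, assuming the official NGINX image (which already defines an unprivileged nginx user); the paths and commands here are illustrative rather than a transcript of her demo:

FROM nginx
# Bind to an unprivileged port: only root may bind ports below 1024
RUN sed -i 's/listen  *80;/listen 8080;/' /etc/nginx/conf.d/default.conf && \
    # Move the pid file somewhere the nginx user can write to
    sed -i 's,/var/run/nginx.pid,/tmp/nginx.pid,' /etc/nginx/nginx.conf && \
    # Hand ownership of the runtime paths to the unprivileged user
    chown -R nginx:nginx /var/cache/nginx /etc/nginx/conf.d
# Everything from here on runs as the nginx user rather than root
USER nginx
EXPOSE 8080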

Liz Rice, software engineer and technology evangelist at Aqua Security, spoke about the pitfalls of leaving default settings in Dockerfiles as they are. (Image credit: Cloud Native Computing Foundation (CC BY-NC 2.0))

Running containers as root isn’t necessarily an issue, but as Rice says: “You might not think that anything is going to happen, but nobody thought Meltdown or Spectre was going to happen, right?” If a future vulnerability enables an attacker to escape a container with root then they can do what they like on the host machine, which is an unnecessary risk. 

Rice also went on to demonstrate that there’s nothing to stop someone from mounting the host’s root directory so that it’s available inside a container. It’s not a smart move, she admits, but at this low level it’s the fact that it’s possible at all that’s the issue. This enabled Rice to change entries in the host’s manifest directory and create a pod for mining cryptocurrency, all without a service account or credentials of any kind.
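A hedged sketch of the kind of pod specification Rice was warning about (the names here are hypothetical); the hostPath volume exposes the node’s entire filesystem at /host inside the container:

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo          # hypothetical name
spec:
  containers:
  - name: shell
    image: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - name: hostroot
      mountPath: /host         # the node's whole filesystem appears here
  volumes:
  - name: hostroot
    hostPath:
      path: /                  # mount the host's root directory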

Rice says there’s work in progress to support rootless containers and user namespaces but, as you’d expect from someone working for a commercial security company, she also pointed out that there are plenty of extra paid-for measures for auditing containers during build and at runtime.
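User namespace remapping is already available in Docker, incidentally, though it isn’t enabled by default. A minimal sketch (not something Rice showed) of /etc/docker/daemon.json:

{
  "userns-remap": "default"
}

With that set, UID 0 inside a container maps to an unprivileged subordinate UID on the host, so a process that escapes the container doesn’t land on the host as root.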

In a different approach, Google’s Craig Box announced that the company was open-sourcing gVisor, a sandboxed container environment. Companies are looking to run heterogeneous (mixed CPUs and GPUs) and less trusted workloads, and this new type of container appeals because it’s designed to provide a secure isolation boundary between the host OS and the application running inside the container.

Box says that gVisor works by “intercepting application system calls and acting as a guest kernel all while running in userspace.” He demonstrated this on a VM that was vulnerable to the Dirty CoW exploit, where an attacker had managed to change the password file in a container. “The exploit is causing a race condition in the kernel,” Box explained, “by alternating very quickly between two system calls and that will eventually give it access.” However, even though the container had the correct permissions to make the system calls, you could see that runsc, the gVisor runtime, had stopped them and the exploit hadn’t worked.
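gVisor plugs into Docker as an alternative runtime. A minimal illustrative setup in /etc/docker/daemon.json (the binary path depends on where runsc is installed):

{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}

A container can then be started under the sandbox with docker run --runtime=runsc --rm -it alpine sh, with runsc mediating its system calls instead of passing them straight to the host kernel.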

Better storage

There wasn’t much elaboration on what better storage would entail from Alexis Richardson during his future-gazing keynote for the Technical Oversight Committee, except to say that the CNCF isn’t “done until it can feed storage into the platform.”

Michael Ferranti, VP of product marketing at Portworx, a company specialising in persistent storage for containers, sees storage as the vital missing piece of the cloud native puzzle.

The community may be excited about transforming enterprise IT from a VMware-based virtual machine model to a container model, but the people sitting on the boards of global enterprises don’t care about that: “What they care about is getting faster to market with applications,” Ferranti explained. “I need to make sure that my data is secure [is what they will say]. I don’t want to read about my company in [a] data breach in the Wall Street Journal. I need to make sure that wherever my user[s] are they can always access my application. What containers and microservices enable is solving all of those problems.”

But according to Ferranti, quoting Gartner, “90% of enterprise applications are stateful, they have data – it’s your database, your transaction processes. So if you can’t solve the data problem for those types of applications and for containers, you’re only talking about 10% of the total deployable applications in an enterprise that can actually move to containers. Now that’s not a transformation; that’s an incremental add-on.”

The problem with storage is that data has gravity: moving petabytes of data from one location to another takes a lot of time. It also exposes data to risk in transit, and because it’s hard to move, you tend to run your application in one location. Ferranti says this is essentially what happened with Amazon: it had a lot of problems with its US East region at one particular point in time, so lots of people had outages because they were dependent on that region.
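A rough back-of-the-envelope figure shows why, assuming a dedicated 10Gb/s link running flat out with no protocol overhead:

1PB = 8 × 10^15 bits
(8 × 10^15 bits) ÷ (10^10 bits/s) = 8 × 10^5 seconds ≈ 9.3 days

And that’s per petabyte, before any retries, encryption or validation of the copied data.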

Ferranti says that Portworx makes it possible to run applications, including those with mission-critical data, across multiple clouds and in hybrid environments, which means you can have a copy in one location as your production system and a disaster recovery site in another. It seems to be doing well from its early adoption of containers too, picking up business from corporate giants such as Comcast, T-Mobile and Verizon.

However, the issue, or one of them at least, that the CNCF has is that persistent storage systems have typically existed outside of cloud native environments, creating the potential for vendor lock-in through provider-managed services. Although Alexis Richardson didn’t mention it in his keynote, he was likely thinking of Rook, the distributed storage orchestrator, as a major part of the solution.

Rook was given early, inception-stage status by the CNCF in January of this year, and the CNCF has indicated that Rook is focused on “turning existing battle-tested storage systems, such as Ceph, into a set of cloud-native services that run seamlessly on-top of Kubernetes.”
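In practice, Rook does this through Kubernetes custom resources managed by an operator. The sketch below is illustrative only; the exact API group and field names vary between Rook releases:

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook   # where Ceph daemons keep local state on each node
  mon:
    count: 3                       # three monitors, enough to maintain quorum
  storage:
    useAllNodes: true              # run storage daemons on every node...
    useAllDevices: true            # ...and consume any unused raw devices

Declaring the cluster this way is the point: the storage system is installed, scaled and healed by Kubernetes itself rather than administered out-of-band.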

Now, Ceph is a distributed storage platform with one particularly significant characteristic: as more units are added to the system, its aggregate capability, in terms of transactions, measured in IOPS (input/output operations per second), and bandwidth, continues to expand.

In December of last year, Allen Samuels, advisory board member for Ceph, said that the community was deeply involved in a redesign of the lowest-level interfaces of Ceph, which will remove its dependence on an underlying filesystem: instead of using a native filesystem, it will use a raw block device and manage that itself. As Rook is seeking to provide file, block and object storage services that feed into Kubernetes, that makes a lot of sense.
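Assuming the redesign Samuels describes is the BlueStore backend that shipped with Ceph’s Luminous release (an assumption on our part), the choice of backend is exposed as ordinary Ceph configuration. An illustrative ceph.conf fragment:

[osd]
# Manage a raw block device directly instead of sitting on a filesystem
osd objectstore = bluestore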

Chris Thornett

Chris Thornett is the Technology Content Manager at onebite, and an editor, writer and freelance tech journalist covering Linux and open source. He is a former editor of Linux User and Developer magazine.