Inside the Linux kernel 3.0

Microsoft matters

What may be more surprising is that the most prolific individual contributor to 3.0 worked for a rather different company - Microsoft.

This doesn't mean that the company has "seen the light", but that it needed additions to the kernel to enable Linux virtual machines to run on its Windows Server platform - so it was less about improving Linux than about selling more of its own products. Whatever the reason, though, Microsoft will now find it more difficult to decry open source and GPL software.

And while we're on the subject of Microsoft, kernel 3.0 also supports the Xbox Kinect controller, which is finding great use in all sorts of non-gaming areas.

There are two things that Linux has plenty of: filesystems and software with confusing names. Btrfs adds to both scores, whether you call it 'Better F S', 'Butter F S' or 'B-Tree F S'. This is an advanced filesystem that implements some of the ideas from ReiserFS and some from Sun's ZFS, as well as covering some of the functions of volume management systems such as LVM. Even Theodore Ts'o, the principal developer of the kernel's ext3/4 filesystems, believes that Btrfs is the way forward.

Btrfs was considered very much an experimental filesystem when first included in version 2.6.29 of the kernel, but it has matured, and Fedora has stated that it would like to make it the default filesystem for Fedora 16. Whether it makes that release or slips to Fedora 17, the plan itself is a sign of the filesystem's perceived maturity, although it is still under active development, with additions at each kernel release.

What makes Btrfs so attractive is its scalability: it was designed to "let Linux scale for the storage that will be available". This doesn't just mean large disks, but also working with multiple disks and, most importantly, doing this in a way that is easy to manage and transparent to applications.
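As a sketch of what that means in practice - these are real btrfs-progs commands, but the device names and mount point are purely illustrative - a single Btrfs volume can span several disks, and more can be added while it stays mounted:

    # Create one filesystem across two disks, then mount it
    mkfs.btrfs /dev/sdb /dev/sdc
    mount /dev/sdb /mnt/data

    # Add a third disk later and spread the existing data
    # across it, all while the filesystem remains in use
    btrfs device add /dev/sdd /mnt/data
    btrfs filesystem balance /mnt/data

Applications see one big filesystem throughout; the juggling of devices happens underneath.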

Additional features, such as transparent compression, online defragmentation and snapshots, only add to its attraction.
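Again as an illustration (the paths are made up), these features are all exposed through ordinary mount options and the btrfs tool:

    # Transparent compression: files are compressed and
    # decompressed on the fly, invisibly to applications
    mount -o compress /dev/sdb /mnt/data

    # Online defragmentation of a directory that is in use
    btrfs filesystem defragment /mnt/data/home

    # Snapshots share unchanged data with the original,
    # so they are near-instant and initially take no space
    btrfs subvolume snapshot /mnt/data /mnt/data/snap-today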

A promise come true

[Image: Linux kernel map]

Way back in 2006, a website appeared with the stated aim of creating open source 3D drivers for Nvidia graphics cards. Many of us were intrigued by the idea, but expected it to get no further than so many of the projects languishing on SourceForge. There are times it feels good to be wrong.

The Linux kernel now contains the Nouveau drivers for Nvidia cards. If you are building your own kernel, they are tucked away in the staging area of the kernel configuration. This area, disabled by default, contains the more experimental or unstable drivers - unstable in the sense of being subject to change, not necessarily of falling over.
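If you are configuring a kernel by hand, the relevant options look something like this - a .config fragment with the option names as found in 3.0-era sources, so check your own tree:

    # Staging drivers are off unless you ask for them
    CONFIG_STAGING=y
    # Build the Nouveau DRM driver as a module
    CONFIG_DRM=y
    CONFIG_DRM_NOUVEAU=m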

However, many distros now include these drivers in their default kernels and installations. While they don't offer the same ultimate 3D performance as Nvidia's own drivers, they are more than sufficient for most non-gaming use, and you don't run into the problems that can occur when trying to mix proprietary, pre-compiled code with open source.

It is unlikely that these reverse-engineered drivers will ever be as fast as ones written by people with intimate knowledge of the workings of the hardware, and with access to code that Nvidia can use but not divulge. However, they are far better than most people's initial expectations, and it is often worth sacrificing a little speed you may never use for the reliability of a properly integrated driver.

AppArmor (Application Armor) is a security module for the kernel. It allows the capabilities of individual programs to be restricted by means of a set of profiles. Assigning a profile to a program, or set of programs, determines what they are allowed to do. This means that even if a program has to run with root privileges to do its job, it cannot do whatever it pleases - a restriction the standard user-based permissions system cannot impose.

It is no longer enough for the user running the program to be "trusted": the program itself can be given explicit permission to do only what it needs. Linux already had SELinux, which fulfils a similar role, but AppArmor is generally considered easier to learn and maintain.
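A profile is a plain text file under /etc/apparmor.d/. The skeleton below is purely illustrative - the program name and paths are invented - but the syntax is real:

    # /etc/apparmor.d/usr.bin.example (illustrative)
    /usr/bin/example {
      #include <abstractions/base>

      # Read its own configuration, nothing more
      /etc/example.conf r,

      # Read and write only under its own data directory
      /var/lib/example/** rw,
    }

Anything the profile does not mention is denied, and a changed profile can be reloaded with apparmor_parser -r. That readable syntax is a large part of why AppArmor is considered easier to maintain than SELinux.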

This is important, because complex security systems are harder to get right, and a security system that you think protects you when it doesn't is worse than no system at all.

Born in fire

AppArmor has been around for a while. A previous incarnation, known as SubDomain, appeared in Immunix Linux in 1998. In 2005 it was released as AppArmor by Novell and included in openSUSE and SLES. However, it was not until the release of Linux 2.6.36, in October 2010, that it became an integral part of the kernel.

If you want to read about the nitty-gritty of kernel development, there is only one place to go - the LKML (Linux Kernel Mailing List). This is where the important work and discussions take place, but it is not a place for the faint-hearted. It is quite high traffic, with around 200-300 mails a day, and very technical. It's also fair to say that many developers are known more for their coding abilities than their people skills.

Email also encourages a blunter approach than face-to-face communication - "if you're subtle on the internet, nobody gets it". Add in the ego factor when people are debating the merits, or otherwise, of their own brainchild, and you have an environment that is more productive than friendly.

Flamewars abound, but they serve a purpose, allowing developers to argue the case for their choice of included code. As long as the debates focus on the topics and not personalities, this remains productive.

In some of these debates even Linus has changed his mind, and admitted that his original position was wrong - although not very often.

There have been some significant flamewars/debates over recent years, such as the one on allowing the inclusion of binary firmware 'blobs' (proprietary code needed by some drivers to interface with the hardware), a topic bound to raise the blood pressure of the freedom purists while appealing to the pragmatists. Or the one between Linus and the ARM developers about their somewhat insular position. This one worked out well, with more ARM-related code being moved into the main tree instead of hiding in its own corner.

Given that there are probably more devices running Linux on ARM than x86 these days (it's the processor of choice for embedded systems and smartphones), this was a sensible evolution - even if (or because) it was born in fire.

Virtualisation is everywhere, from Linux Format readers using VirtualBox or KVM to test distros from the DVD, to massive data centres providing hosting on virtual machines. The kernel has support for the KVM (Kernel-based Virtual Machine) and Xen virtualisation systems.

There is also extensive support for proprietary systems, such as those supplied by VMware and Microsoft, so that Linux virtual machines can run just about anywhere. A virtual machine emulates a hardware computer, so anything that reduces the amount of emulation required by the software, and places as little overhead as possible between the virtualised hardware and the metal it runs on, is a good thing.

The KVM hypervisor extensions have done this for the CPU for some time, and now the network has had the same treatment. All communication between a guest OS and either the host or another guest goes over a network connection, and virtual machines have traditionally emulated a real-world network card for the widest compatibility.

Linux recently introduced the vhost-net driver, which does away with as much of the legacy hardware emulation as possible, giving speeds of up to 10 gigabits per second as well as lower latency. It still works with the guest's standard (virtio) network drivers, so a specially modified virtual machine is unnecessary: all the work is done in the kernel, and set up on the host.
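To see it in action with KVM, something like the following works - the disk image and interface names are invented, and the exact flags vary between QEMU versions:

    # Load the in-kernel vhost-net module on the host
    modprobe vhost_net

    # Start a guest whose virtio NIC is backed by vhost-net
    qemu-kvm -m 1024 -drive file=guest.img \
      -netdev tap,id=net0,ifname=tap0,vhost=on \
      -device virtio-net-pci,netdev=net0

The guest just sees an ordinary virtio network card; the vhost=on switch moves the packet shuffling from the QEMU process into the host kernel.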

Rise of the Androids

One company has managed to dramatically increase the number of people using Linux-powered systems, and we're not referring to Canonical. Google's Android OS has put Linux in the hands (literally) of millions of people who don't even know what an operating system is, let alone have heard of Linux.

Is that a good thing? That remains to be seen, but it's a subject for plenty of discussion. What also concerns many is how Google participates in kernel development - or doesn't. Many of the changes it makes are not fed back to the kernel. This is within the terms of the GPL, as long as the source is available, but many feel it's not in the spirit of sharing that forms the basis of the GPL.

Whether this is due to a reluctance on Google's part, or the way its development process works (it seems to prefer to feed back in large blocks at infrequent intervals rather than the far more common 'little and often' approach of open source) also remains to be seen.

It is, however, a huge, if largely invisible, take-up of Linux over the past couple of years. If only Linus had had the foresight to adopt a licence that required any device running Linux to display Tux somewhere.

Incidentally, Android may be the strongest argument for calling the desktop/server operating system GNU/Linux to differentiate it from other Linux-based operating systems, like Android, while at the same time showing that Linux without GNU is both workable and popular.

While there haven't been many obvious, major leaps forward for the Linux kernel recently, this is simply a sign of its maturity, and a justification for the 3.0 label.

It will continue to evolve and progress in small steps (most of the time), which may seem uneventful. If you ever doubt the progress that Linux has made over recent years, grab a three-year-old copy of Linux Format, read what was considered new and interesting, and then try installing the distro from the DVD on your new hardware.

Here's looking forward to the new features in the 4.0 kernel, due around 2021 - even if it is just a renumbered 3.99.