Without taking risks, you don't develop

Morrison
If you want to improve, you must first get used to failing

I've just returned from my first snowboarding holiday in eight years. After such a long period off the slopes, I really didn't know whether I'd need to completely re-learn - whether I'd find the whole ordeal too terrifying, or whether I'd be able to just jump on my board and pretend those eight years had been eight days. In the end, it was the latter.

I was able to strap myself in on the first morning, point the board down the hill and reach the bottom of the slope without dislocating my left knee or pushing any stray skiers into a crevasse. It seems that while there isn't much similarity between riding a bike and riding down an icy slope on a plank, neither will let you forget how to ride once you've learnt.

As a result, I had a fantastic week exploring new territory and rebuilding my old confidence. Now, I think there's a link between open source software and the way you learn to ride a snowboard.

It's the old idea that without taking risks, you don't develop. And I don't mean large risks like jumping off a cliff or taking a straight route to the bottom of the piste. I mean small, considered risks, like going a little faster, turning a little tighter or moving on to an unfamiliar track.

Even if you fall over, which you will, you need to take your ability beyond its current limits to understand better how to improve. It's only after you've ridden faster, steeper or harder that you can return to those old slopes with renewed confidence, ability and insight. It's how we all get better, and it doesn't happen if you don't push yourself.

Despite a lack of mountains and raclette, I think Linux and free software are somewhat like snowboarding, and it's the cycle of adventure, failure and improvement that is core to their success. 'Release early, release often' is a risky idea, popularised by Eric S Raymond in his must-read essay, The Cathedral and the Bazaar.

Experimentation is the key

There are many different ways of interpreting this, from the rapid fire of kernel releases to the rolling cycle of distributions like Arch, but at its lowest level, it's an idea that should push the thousands of projects that have already climbed their first slope - solving a specific problem - on to publishing at least one release.

There are two such projects of my own I can think of. One was an editor for a difficult-to-program synthesizer (an Alesis Micron), and the other was a software controller. I've probably spent hundreds of hours on them, but because neither is fit for use (and never likely to be, because I've moved on to other things), they'll never see the light of day.

They'll never be shared because they're barely functional, contain terrible coding shortcuts and have no support. Had I released the code as soon as I thought the project was mature enough to compile easily, regardless of the overall quality, there would have been a better-than-zero chance that one of those projects - or at least part of it - would live on.

Decoding the binary data for the synthesizer took a great deal of time, for example, and that information would save another developer having to do exactly the same thing.

The disadvantages

But there are two massive disadvantages with this approach. The first, and most problematic, is that because there's no quality control, most projects are going to be adding to the interminable background noise of the internet. You only have to take a look at the hundreds of thousands of projects languishing on sites like SourceForge.net to see what I mean.

The second problem is that, in making a release like this, you're opening yourself up to criticism and ridicule because it's never going to be of a standard you'd normally set yourself. This means it's important that users and developers recognise the difference between a final version and an 'in-progress' release. This is a subtle problem for free software, because there are so many different ways you can go about tackling the same issue.

The in-progress version of the kernel can be downloaded at any time from a Git repository, for example, while releases between versions 1.0 and 2.6.x used odd-numbered minor versions to denote a developmental stage - 2.5 was the unstable series that eventually settled down to become 2.6, while 2.4 stayed stable. This method was superseded when Linux adopted a more 'release often' approach, thereby negating the need for developmental versions altogether.
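If you want to peer over the bleeding edge yourself, it's a single command away. The exact URL for Linus Torvalds's mainline tree at kernel.org has changed over the years, so take the following as a rough sketch rather than gospel:

  git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git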

But this release strategy only works here because so many people are working with the kernel every day. It doesn't work for small projects, or even some large ones like Gnome. The major release of Gnome 3.0, like KDE 4 before it, took the community in a direction that it wasn't prepared for, making a major update feel like a very risky idea-in-progress.

Yet despite these difficulties, free software has ultimately proven that the methodology of airing your ideas in public, taking risks and going off piste is the best way to innovate. It's something that proprietary businesses can't compete with. After all, if you were worried about getting hurt, looking like an idiot or leaving your comfort zone, you'd never snowboard either.