Contrary to popular opinion, the phrase 'Web 2.0' was not coined by Tim O'Reilly and did not, originally, refer to web applications like Facebook and Twitter that enable Muggles, er, non-web-professionals, to share information online.

More than a decade ago, Darcy DiNucci predicted that:

"The Web we know now, which loads into a browser window in essentially static screenfulls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfulls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will [...] appear on your computer screen, [...] on your TV set [...] your car dashboard [...] your cell phone [...] hand-held game machines [...] maybe even your microwave oven." – DiNucci, D. (1999) "Fragmented Future," Print 53

This first use of the phrase 'Web 2.0' was a vision of what we now call ubiquitous computing and what marketers call convergence. As with all futurist visions considered in the cold light of hindsight, some of DiNucci's language sounds naïve and a few of her predictions fall short.

Certainly "your TV set" hasn't become the hippest place for hot Web 2.0 action in most countries, unless you consider downloading episodes of The Real Housewives of New Jersey the height of web-based interactivity.

But DiNucci looks a positive oracle where her "cell phone" prediction is concerned, because the ubiquity of high-resolution, CSS3- and HTML5-capable smartphones powered by WebKit is bringing real, empowering change to our medium.

Convergence

Cheap, complex devices such as the iPhone and the Droid have come along at precisely the moment when HTML5, CSS3 and web fonts are ready for action; when standards-based web development is no longer relegated to the fringe; and when web designers, no longer content to merely decorate screens, are crafting provocative, multi-platform experiences. Is this the dawn of a newer, more mature, more ubiquitous web?

In a word, yes. After the hype of the dot-com boom and bust, the hard sell around blogging, the endless flogging of social media and other widely heralded game-changers, we who practice web design find ourselves at a genuine inflection point.

With browsers and devices of great reliability supporting mature standards, with a seemingly bottomless demand for apps powered by these standards, and with consumers queueing in the rain to possess the newest complex device before their neighbour gets hold of it, the era of mature standards-based design is upon us. The web we grew up with is as obsolete as the concept album. (Kids, ask your parents.)

In yesterday's web, each corporate site stood alone, a self-contained object like The Beatles' Sgt. Pepper's Lonely Hearts Club Band album. Today, a corporate site is only as good as the third-party APIs and links it facilitates. Yesterday's websites were optimised for Internet Explorer Version X or Netscape Navigator Version Y; today, site owners live and die on the addictiveness and ease of use of their mobile site and apps.

Time was, the adjectives 'well-designed' or 'rich' were code for 'created in Flash', but after more than a decade of standards-based design and advocacy, and with the advent of web fonts, we know that (X)HTML, CSS and JavaScript can power web experiences of extraordinary beauty – and are even more likely than Flash to be the driving force behind the richest web applications and experiences.
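Web fonts, for instance, need nothing more exotic than a few lines of CSS. The sketch below is illustrative only; the font name and file paths are placeholders, not a real typeface:

```css
/* Hypothetical font name and paths – substitute your own licensed web font */
@font-face {
  font-family: "MuseoSans";
  src: url("fonts/museosans.woff") format("woff"),
       url("fonts/museosans.ttf") format("truetype");
}

/* Once declared, the face is used like any other font-family value */
h1, h2 {
  font-family: "MuseoSans", Helvetica, Arial, sans-serif;
}
```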

Wildly successful sites such as Flickr, Twitter and Facebook offer genuinely portable social experiences, on and off the desktop. You don't even have to go to Facebook or Twitter to experience Facebook and Twitter content, or to share third-party web content with your Twitter and Facebook friends.

Training wheels off

As our knowledge of standards-based design has matured (ironically helped, in large part, by the five years between IE6 and IE7, which gave us time to figure out bugs and workarounds and teach them to even our most standards-averse colleagues), most of us have also become more and more interested in user experience and content strategy – a discipline that's been around for ages but is only now gaining the attention it deserves, thanks in part to the evangelism of Kristina Halvorson.

We've become user-focused and best practices-aware at the very moment that emerging standards offer us tremendous new power, our new browsers (including IE9) give us the chance to explore that power, and our best browsers power our most popular and powerful phones. Talk about convergence!

And with consumers buying two smartphones for every desktop computer they purchase, the demands, challenges and opportunities of the mobile space are reshaping our assumptions about design and user behaviour.

So let's consider this moment of change and sweep away the misconceptions and half-truths that keep some of us from embracing the opportunity before us. For openers, let's check out CSS3.

CSS3 for you and me

CSS3 is the W3C's latest, ablest and most complex version of the web's standard language for visual design.

CSS3 media queries are an empowering technology behind 'Responsive Web Design', an emerging best practice and key component of the mature, multi-platform web. Just as important is what CSS3 isn't.
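A media query is simply a conditional wrapper around ordinary CSS rules. As a minimal sketch (the breakpoint, widths and class names here are illustrative, not canonical):

```css
/* Two-column layout by default */
.content { width: 66%; float: left; }
.sidebar { width: 30%; float: right; }

/* On screens 480px wide or narrower, stack the columns */
@media screen and (max-width: 480px) {
  .content,
  .sidebar { width: 100%; float: none; }
}
```

The same markup serves every device; the stylesheet adapts to the viewport.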

CSS3 isn't a monolithic specification (like CSS 2.1) that must be implemented in its entirety before people from nice homes consider it safe to use. Learning from browser implementation struggles of years past, the W3C wisely opted to design CSS3 as a series of modules, which can be worked out in browsers piece by piece.

PURE CSS3: An iPhone made with no images (just pure CSS3)

If prior W3C specs are like a full-blown website redesign that has to be perfect on the day of the launch, CSS3 is more like a series of gentle site updates, rolled out over months and years to give users time to get used to them – and designer/developers time to get them right.

This means you don't have to read and memorise the entire CSS3 spec at once, and browser makers don't have to try to implement every bit of it immediately – which is how browser makers have got into trouble in the past, and how we used to get stuck with half-baked CSS implementations for years at a time. Think back to the old IE box model that was more intuitive than the actual CSS1 box model, but wrong.

Designers had to hack around it for nearly a decade, using Tantek Çelik's famous Box Model Hack and various other workarounds. Those who refused to use hacks on principle often beat IE's box model into shape by bloating their markup with otherwise needless containing divs.
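For the curious, the difference is easy to show. In the CSS1 box model, padding and borders are added outside the declared width; old IE counted them within it. CSS3's box-sizing property now lets you choose either behaviour explicitly (a sketch):

```css
.box {
  width: 200px;
  padding: 20px;
  border: 5px solid #333;
  /* CSS1 ('content-box') rendering: 200 + 40 + 10 = 250px total width.
     Old IE in quirks mode: 200px total, content squeezed to 150px. */
}

.box-alt {
  width: 200px;
  padding: 20px;
  border: 5px solid #333;
  box-sizing: border-box; /* opt in to the old-IE-style (and arguably more
                             intuitive) sizing: 200px total, borders and
                             padding included */
}
```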

Fortunately, we won't be stuck with similar problems as browser engineers tackle the new CSS specs, because the modularity of CSS3 enables browser geeks to sweat the details, one feature at a time. Thus we get well thought-out, reasonably consistent feature implementations in the latest Safari, Firefox and Opera.
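In practice, that piece-by-piece rollout shows up as vendor-prefixed properties, which let each rendering engine ship a feature while its spec is still settling. A hedged sketch of the pattern:

```css
/* Rounded corners, shipped under experimental prefixes before the
   unprefixed property was finalised */
.badge {
  -webkit-border-radius: 8px; /* Safari, Chrome */
  -moz-border-radius: 8px;    /* Firefox */
  border-radius: 8px;         /* final spec – listed last so it wins
                                 once browsers support it unprefixed */
}
```

Browsers ignore declarations they don't recognise, so the prefixed lines degrade gracefully.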

And since more than a vanguard of web designers is experimenting with CSS3, the browser makers get instant feedback about what works and doesn't. In some cases this feedback can be rolled back into the W3C spec before it's finalised, creating the kind of feedback loop we never had before. It's a whole new web of shared understandings, out in the open, where anyone with a good idea can see and contribute.