Web 2.0: Simple division?


Simon St. Laurent
Jan. 23, 2006 08:00 AM


There's been a lot of talk about the revolutionary change Web 2.0 promises, and it's time to look at the architecture that's leading to that change: a greater split between client and server logic.

Lots of people have pondered what Web 2.0 means. Tim O'Reilly posted an extensive "What is Web 2.0?" article that goes well beyond what I'm seeing as the root cause of the shift. A Google search on "What is Web 2.0?" brings up all kinds of answers - and questions.

My personal sense is that all of these answers and questions reflect a shift in the architecture of web applications. There's no new technology involved - most of these components were available even before the Web 1.0 bubble burst. The shift is in the way that we are applying basic technologies like HTML, JavaScript, and XML to long-existing problems.

In Web 0.x, web sites were collections of static files, all available for viewing with a modest amount of interactivity. In Web 1.0, those pages were supplemented with frameworks that generated pages dynamically, allowing the use of more robust server-side tools and making it possible for sites to become larger, more interactive, and connected to existing systems.

Now, with Web 2.0, those frameworks aren't just generating web pages. More and more, the data that is available in their back ends is becoming directly accessible without an HTML intermediary. You can get data from Google or Amazon without having to get the HTML they normally generate along with it.
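To make that concrete, here is a minimal sketch of what consuming raw data looks like. The XML shape below is hypothetical, a stand-in for the kind of response a service's API might return; in practice it would arrive over a plain HTTP GET rather than be embedded in a page.

```python
import xml.etree.ElementTree as ET

# In a real client this string would come from an HTTP request, e.g.:
#   from urllib.request import urlopen
#   xml_data = urlopen("http://example.com/api/items?q=web+2.0").read()
# Here we inline a hypothetical response so the sketch is self-contained.
xml_data = """<results>
  <item><title>Web 2.0 overview</title><price>12.50</price></item>
  <item><title>REST in practice</title><price>9.99</price></item>
</results>"""

# Parse the data directly -- no HTML intermediary to scrape through.
root = ET.fromstring(xml_data)
titles = [item.findtext("title") for item in root.findall("item")]
print(titles)
```

The client gets exactly the data it asked for, in a structure meant for programs rather than browsers.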

Sounds small, right? It's not a huge change, nor is it new technology. Web 2.0 skeptics are right that there isn't a major technology shift here, and that's a good signal to be on the alert for hype.

It's not small, though. Web applications that offer raw data in addition to (or instead of) HTML-formatted data create huge new possibilities. This is why we've seen mashups spreading across the landscape, combining data from multiple servers in a new client application. It's also why aggregation is becoming easier and more popular: this approach skips screen-scraping (and its inherent legal problems), allowing new services to build on old ones.
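A mashup in miniature might look like the sketch below. The two datasets are hypothetical, standing in for responses already fetched from two different services' APIs; the point is that the join happens in the new application, with no scraping involved.

```python
# Hypothetical data from a housing-listings API.
listings = [
    {"address": "12 Elm St", "zip": "10001", "rent": 1800},
    {"address": "9 Oak Ave", "zip": "10002", "rent": 1500},
]

# Hypothetical data from a city-statistics API, keyed by ZIP code.
crime_stats = {
    "10001": "low",
    "10002": "moderate",
}

# The mashup: combine the two feeds into something neither service offers.
combined = [
    {**home, "crime": crime_stats.get(home["zip"], "unknown")}
    for home in listings
]
```

Each source keeps doing what it does; the new value comes from the combination.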

One especially exciting part of this - like many web technologies - is that you don't have to be a meganational supersized company to work in it. You can create clients that work with this information for web browsers (using Ajax, Flash, or other approaches), or even for programs in more traditional environments. You can also create servers which share data this way, or servers which call other servers to use their information.
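The server side of this is just as approachable. Here is a hedged sketch, using only Python's standard library, of a handler that answers GET requests with XML rather than a rendered page; the path, payload, and inventory data are all hypothetical.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical back-end data we'd normally bury inside generated HTML.
STOCK = {"sku-1": 12, "sku-2": 0}

def inventory_xml(stock):
    """Serialize the data itself, with no presentation markup."""
    items = "".join(
        f'<item sku="{sku}" count="{count}"/>' for sku, count in stock.items()
    )
    return f"<inventory>{items}</inventory>"

class DataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = inventory_xml(STOCK).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To actually serve: HTTPServer(("", 8000), DataHandler).serve_forever()
```

Any client that speaks HTTP, whether a browser running JavaScript or a traditional program, can consume this.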

We're really at the beginning of this, and there are lots of problems, technical and cultural, to address. We had a bit of a false start here with SOAP-based Web Services, which have proven to be mostly Enterprise Services, retreating behind the firewall. It's not clear what to call this new model, nor is it always clear what tools to use to build it. There are lots of ways to build programming interfaces on the Web, and while my favorite is REST-based XML, a variety of text-over-HTTP options are out there.

There will be all kinds of business and technical shifts as a result of this architectural opening. For those, I think Tim's description of Web 2.0 is probably the best crystal ball to visit today.

Simon St. Laurent is an associate book editor at O'Reilly Media, Inc.