Publishing Lessons from Web 2.0 Expo

Last week I was in New York for the city’s first Web 2.0 Expo. I was a member of the program committee, and one of our goals was to make it a uniquely New York event. That meant a real focus on measurable outcomes and on integrating Web 2.0 principles into established businesses, in contrast with the more startup-friendly atmosphere of the San Francisco event. The fact that the conference ran during the week of the Wall Street meltdown only reinforced the need for pragmatism in tough economic times.

Naturally I was interested in applying what I learned to the publishing world. If you couldn’t make it to the event, here are my big takeaways:

Web 2.0 is social software
Consultant Dion Hinchcliffe’s tutorial on the Web 2.0 landscape summed it up best: Web 2.0 means software that gets better the more people use it. This is radically different from traditional software development, which gets better only when programmers add new features. (In the case of Microsoft Word, it generally gets worse.)

The best example in the publishing space is LibraryThing, which has not only a more accurate book catalog than Amazon.com but also content found nowhere else. My favorites are the Legacy Libraries, which catalog the book collections of famous dead readers. The Legacy Library project illustrates a related principle of Web 2.0: encourage unintended uses. LibraryThing was designed for individuals to catalog and rate their own books, but this user-driven initiative has added tremendous unexpected value.

Thinking outside the box
That is, outside of a single computer (geeks like to call them “boxes”). More and more Web applications are being built on top of other services or are making use of so-called cloud computing. Amazon, Google, and other providers now offer a wealth of ready-made software and effectively unlimited computing power, allowing companies to leapfrog problems of cost and scaling.

Only a few years ago, when a publisher approached me to start a project, we would begin at the beginning: buying a computer, selecting a service provider, writing some HTML, crunching some data. With services like Amazon’s Elastic Compute Cloud (EC2), there’s no longer any need to buy hardware: an application can be deployed instantly on one computer, or on a thousand, at very low cost. That makes experimentation far more feasible: if no users come to a new product, no expensive hardware investment has been wasted; if the product succeeds, a few keystrokes can add ten times the computing power.
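To make that concrete, here’s a minimal sketch of launching servers through EC2’s API using the boto Python library. The AMI ID and credentials are placeholders, not a real deployment:

    # Minimal sketch: launching EC2 instances with boto.
    # The AMI ID and keys below are placeholders.
    from boto.ec2.connection import EC2Connection

    conn = EC2Connection('MY_ACCESS_KEY', 'MY_SECRET_KEY')

    # Start with a single small server...
    conn.run_instances('ami-12345678', min_count=1, max_count=1,
                       instance_type='m1.small')

    # ...and if the product takes off, the same call brings up ten more.
    conn.run_instances('ami-12345678', min_count=10, max_count=10,
                       instance_type='m1.small')

The difference between one machine and ten is a single parameter, not a purchase order.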

Cloud computing has also created tremendous benefits for offline processing tasks, as The New York Times showed when it converted its digitized archive for use on the Web.
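The pattern behind that kind of project is embarrassingly parallel batch work: the same conversion runs independently on every file, so it spreads naturally across rented machines. Here’s a toy sketch of the shape of such a job; the file names and conversion step are hypothetical stand-ins, and this version runs on one box where a real cloud job would distribute the same map across many:

    # Toy sketch of an embarrassingly parallel batch job.
    # File names and the conversion step are hypothetical stand-ins.
    from multiprocessing import Pool

    def convert(scan_path):
        # Stand-in for the real work, e.g. rendering a scanned page to PDF.
        output_path = scan_path.replace('.tiff', '.pdf')
        # ... perform the actual conversion here ...
        return output_path

    if __name__ == '__main__':
        scans = ['page-%05d.tiff' % i for i in range(1000)]
        pool = Pool(processes=8)   # one worker per core on this machine
        finished = pool.map(convert, scans)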

It’s not just about people, it’s about data
Finally, Toby Segaran’s talk on “The Ecosystem of Corporate and Social Data” reminded me how much value publishers are already sitting on in their data. Toby explored clever ways of finding normally expensive data for free (for example, rather than paying for Yellow Pages listings of restaurants, he scraped the New York City health department Web site, which includes ratings of every food-service facility).
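For the curious, here’s a minimal sketch of that scraping approach in Python, using urllib2 and BeautifulSoup. The URL and page structure are hypothetical; a real scraper would be written against the site’s actual markup:

    # Minimal scraping sketch. The URL and HTML structure are hypothetical.
    import urllib2
    from BeautifulSoup import BeautifulSoup

    html = urllib2.urlopen('http://example.gov/restaurant-inspections').read()
    soup = BeautifulSoup(html)

    listings = []
    for row in soup.findAll('tr', {'class': 'inspection'}):
        name = row.find('td', {'class': 'name'}).string
        rating = row.find('td', {'class': 'rating'}).string
        listings.append((name, rating))

A few dozen lines like these can substitute for a data feed that would otherwise cost real money.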

Diving deeper, he emphasized how much value digital services gain if they launch already full of content. Wikipedia came preloaded with a public domain encyclopedia, since it’s much easier for users to correct and update existing content than to enter it from scratch. The more of your content that users can find and interact with (for example, by providing an extensive full-content backlist), the more engaged they’ll be.

Speaker presentations for the conference are available here: Web 2.0 NYC presentations.
