Categories
Innovation

Ricky, harping on about Web 2.0… again

After I wrote a post about Web 2.0 not being here yet, Ben wrote a piece on the subject, arguing that the cool new things that are happening on the Web these days warrant a version increment. I left a comment on that post in which, as unlikely as it sounds, I drew an analogy between the Web and turkey basters. Now I think I have a much better metaphor for the evolution of the Web than product version numbers.

Historians, both the professional and casual kinds, refer to the various stages of human history with names such as “Stone Age”, “Bronze Age”, “Industrial Age” and so on. These names reflect the progress of humankind as they evolved from using crude stone implements to refined materials to machinery. The space in which humankind has always lived, Earth, provides the raw resources, which, as time passed, humans learned to refine and compose to make more useful things. But historians do not feel compelled to refer to Earth by a new name (Earth 2.0) simply because one of its inhabitant species got a bit clever and started manipulating the raw materials available to them in new and interesting ways.

The Web is an abstract space analogous to the Earth. It contains resources – protocols, scripting languages, hypertext and so forth – which may be refined and composed to create novel things. The first stage of human existence within the Web was the “Hypertext Age”. URLs, Hypertext and the accompanying Hypertext Transfer Protocol (HTTP) were the raw materials of the Web in its infancy. These raw materials were provided by the almighty Creator (Tim Berners-Lee). The next stage of the Web was the “Dynamic Age”, whereby CGI, servlets and so on injected some modicum of motion into the creations of the Web’s inhabitants, just like the invention of the steam engine triggered a comprehensive mechanisation of industry. We now find ourselves squarely in the “Social Age” of the Web, in which humankind has fashioned some extraordinarily powerful tools within the Web and the physical world to fulfil their innate desire to socialise. The “Social Age” of the Web was brought about by the serendipitous co-occurrence of a generation of people who desire social connectivity on a new level, a range of technologies such as camera phones, iPods and cheap digital storage, and clever ways of putting together existing Web technologies (think Ajax). What circumstances will arise to give humankind the push into the next age of the Web? What will the next age of the Web look like? What experiences will be had during that age?

I won’t be holding my breath waiting for the world to take up this metaphor for the history of humankind’s existence in the Web. The “Social Age” of the Web is not as snappy and catchy as “Web 2.0”. But maybe we’ll tire of incrementing version numbers by Web 11.0!

Categories
Innovation

Schulze & Webb: Awesome

Before I go any further with this post, I want to thank Ben for imploring the readers of his blog to check out this presentation from some guys called Schulze & Webb. These days, you get pointers to so much stuff out there on the web, a lot of it interesting, but a lot of it only so-so. Then, occasionally, you’ll come across a gem that truly was worth reading, and the presentation by Schulze & Webb, for me at least, is one of those gems. A word of advice if you do decide to read it, though: read it right through, as there’s a lot of good stuff in it.

I can relate to the presentation, titled The Hills are Alive with the Sound of Interaction Design, and its authors, Schulze & Webb, on a number of different levels. For starters, they use the example of football, specifically that magical goal Argentina scored against Serbia and Montenegro in the 2006 World Cup, to illustrate the concept that the means or the experience is more important to most people than the end result. In scoring that beautiful goal, Argentina strung together 24 passes before Cambiasso struck the ball into the back of the net. Football fans all over the world appreciate that goal because of the lead up to it, not the goal itself. This is also one of the reasons why football lovers can tolerate nil-all draws, and indeed why it can be truthfully said that some nil-all draws are more enjoyable than games in which five goals are scored: there’s so much more to the game than the goals. But football is only the most obvious example. The same can be said of other sports from cricket (the innings-long battle between batsman and bowler, rather than the fall of the wicket) to tennis (the rallies, rather than the rally-ending shot). Anyway, using football to illustrate a neat concept is a sure way to get me on side!

Their presentation also resonates with my recently written “About” page. They both speak of thresholds, boundaries and tipping points. They both talk about figuring out how to develop new things that harmonise with human experience and the human cognitive model (I love their bumptunes hack for the PowerBook; I wonder if the MacBook Pro has an accelerometer?).

Several months before submitting my Ph.D. thesis, I made the decision that I wanted to refocus my subsequent pervasive computing research more towards the user, or at the very least, to ensure that if I was going to be developing middleware to support pervasive computing applications, I would lobby hard to have some time and resources set aside to build decent, cool applications to exercise that middleware. It turns out I didn’t have to lobby that hard! But the point I’m trying to make here is that the Schulze & Webb presentation has provided a timely reminder of why I made that decision to think more about the user in the work that I do: it’s because in the research space I work in, that’s where the rubber hits the road. You can build middleware, context management systems and so forth, but in the end, it’s all in support of what the user wants to do, and it’s a fun challenge figuring out neat applications that people actually want to use because they’re a joy to use.

The challenge in my particular line of work is this: how do you create applications for emergency and disaster prediction, response and recovery which are “fun” to use? How do you design an application for the emergency services sector which creates an experience as pleasurable as watching Argentina’s second goal against Serbia and Montenegro in the World Cup? Is it even appropriate to create fun applications for an industry that, by definition, regularly deals with human tragedy? I hope the answer to the third question is a resounding “yes” if the applications help to save more lives than would otherwise be the case. Perhaps the word I’m looking for isn’t “fun” but “rewarding”. An application that makes its user feel rewarded for using it is a successful application because, presumably, the user will want to continue using it. An emergency worker feels rewarded if they are saving lives and making snap decisions that turn out to be good ones. Therefore, I think a good reformulation of my goal while I remain part of the SAFE project at NICTA is this: to develop rewarding applications (and supporting infrastructure) for the emergency services sector. This isn’t far off my official job description, but what it does is bring into sharp focus the importance of considering the users’ experiences as they interact with the application and system.

Thank you Ben. Thank you Schulze & Webb.

Categories
Innovation

Against academic hermits

In writing about the misconceptions of collaboration, I hadn’t expected that anyone would interpret my article as arguing against any kind of grouping of researchers, but it’s come to my attention that at least one reader has interpreted it that way, and a re-reading of my post tells me that’s a fair enough interpretation of what was written. I’m happy to take the blame for this, as my writing can be kind of loose at times. On the other hand, I’ve received some e-mails that indicate some of my other readers knew exactly what I was trying to get at, as they’d experienced some of the things I was talking about.

Nevertheless, I’d like to clarify that I’m all for research teams! I hadn’t considered that research teams of the sort one might find within a single research organisation can be classified as a collaborative arrangement. To me, such a grouping is something more than a collaboration. Rather naively, I had not even thought about the possibility that academics can still get away with working in complete isolation, so I was somewhat stupidly relying on the reader to make some assumptions about what was written. When I spoke of loosely coupled interactions, I meant interactions between two or more groups of academics at different institutions rather than between individual academics. When I said “History is littered with hundreds of examples where this is the case, and very few in which close collaboration between teams of researchers yielded a scientific breakthrough”, I was talking about interactions between multiple teams rather than intra-team dynamics.

What I’m arguing against are top-down directives that mandate collaboration between research teams from different organisations with little or no forethought. There’s pressure to form these kinds of collaborations because they attract funding either directly, or indirectly, whereby they are used as a kind of metric that can be counted to attract future funding.

It has also occurred to me that if the article is read in a certain way, it could be seen as demeaning Australian CRCs. While I do believe it is getting harder for researchers who work in CRCs (and other government-funded institutions) to focus on fundamental research, I certainly am not asking for the end of CRCs! Good grief! A certain CRC with which I have had the privilege to be associated will probably always be one of the best, coolest, most awesome places I will ever have worked at, and that’s precisely because of the people who worked there and the creative buzz one felt when interacting with those great people.

I think Kerry actually pin-pointed exactly the point I was trying to make:

Yes, but would not a formal collaboration of the correct range of skills in the one lab with a common goal have found it sooner?

The key point is “the one lab”. This is exactly right. But again, as soon as a cohesive unit is formed, to me it ceases to be what I think of as collaboration, and I think this is where Kerry and I were getting our wires crossed. When a research team is formed within a lab, the unit is no longer the researcher: it’s the research team. When collaboration does happen in this context, it is not between individual researchers but between entire research teams. And the fact that the researchers have gathered in the one lab indicates that they’ve all been drawn there of their own accord. It’s a bottom-up process, not a command-and-control one. Furthermore, collaboration between one research team and another one will happen of its own accord where the majority of individuals involved see the benefit. There’s no need for other (politically motivated) incentives.

For what it’s worth, I personally can’t even imagine being some kind of academic hermit working in isolation. I need to be part of a team so I can feed off the vibes and ideas of my co-workers. Hell, I’ve never even written a paper on my own (barring my thesis)! I hope that takes care of any misunderstandings. But knowing my luck, I’ve just dug a deeper hole! Perhaps you’ll be better off reading Kerry’s concise clarification.

Categories
Eco-philo-pol

Difference between social media and old media

In listing a few of the differences between new (social) media and old media, Scoble writes

The media above can’t be changed. A newspaper can’t magically change its stories, even if society decides something in them is incorrect. My blog can be updated for all readers nearly instantly if someone demonstrates that I was wrong on a post.

I wonder if he’s read Nineteen Eighty-Four.

Categories
Devonshire tea review

Room with Roses

The day before Valentine’s Day, Karen and I decided to have Devonshire Tea at Room with Roses in the heritage-listed Brisbane Arcade. One of the main attractions of dining at Room with Roses is the old-world atmosphere. The tables, chairs and other décor all create the illusion that you’ve been transported back to the eighteen hundreds. There are roses on every table, and several nooks where you can escape the modern world.

Although they don’t do a set Devonshire Tea, their scones are delicious, if not traditional Devonshire scones. And they’re big! They had a thin layer of raspberries, and they melted in your mouth. Room with Roses provided a good approximation of clotted cream (it might just have been Paul’s dollop cream, but it was much better than what is often served with scones in this country), and some very nice raspberry jam. They served some kind of leaf tea in a dainty little pot that had a very old Commonwealth Bank insignia on it. Strictly speaking, the cups were for coffee rather than tea, but I suppose it’s hard to provide bone china tea cups in a popular restaurant. Karen enjoyed her hot chocolate also. The other diners seemed to be thoroughly enjoying their lunches, too.

The only slightly disappointing thing was the noise from the kitchen, though in fairness our table was very close to the kitchen while the majority of tables are a good distance away. Given the kind of atmosphere that Room with Roses is trying to create (and for the most part it succeeds), it would be a nice touch if orders were taken at your table rather than having to go to the counter to order. The little number sign you have to take back to your table to display for the staff spoils the effect created by the surroundings ever so slightly.

Our Devonshire Tea cost $20.20 (two scones with jam and cream, an English breakfast tea and a hot chocolate). A booking is recommended.

Categories
Random observations

I’m a Mac, and I’m a PC

After a week of using the MacBook Pro that NICTA bought me (strictly for work purposes, of course), I gotta say, I love it! The MacBook Pro will be replacing my Windows desktop at work, and it’s also for taking back and forth between home and work and for taking to conferences etc. I’ve been working on a publishing and reviewing system, and up until now, although it’s NICTA’s IP, it was all being done on my own Linux box at home – not the optimal state of affairs. The sub-optimal nature of this arrangement was made crystal clear when my Linux box started to fail (it’s quite old). So, I asked for a laptop so that I could work on the SAFE project stuff at work as well as the publishing and reviewing stuff at home. Somewhat to my surprise, NICTA duly obliged. At least now if something goes wrong with the laptop, all the code is on a NICTA machine and hopefully I won’t be culpable. Of course, it’s much easier to lose a laptop or to have it stolen than a desktop…

The loser out of all this is Linux. I bought a Dell to replace my home machine, and it’s got Windows Media Centre (with free upgrade to Vista) and Office on it. Karen and I need at least one up-to-date copy of Office between us. The Dell machine is very nice, but I’m a bit disappointed that I seem to have settled into using Windows at home, a day I thought would never come. I’m not a fan of dual booting – I’m generally too lazy for that kind of thing. To my chagrin, in my current job I really do need to use Office products quite frequently, and I’ve never been happy with any of the Open Source Office replacements. I’m still thinking this is only a temporary backward step, and that sooner or later I’ll be back on Linux, or I could even run Mac OS X on the Dell; now there’s an idea!

But one must give Microsoft credit where credit is due. My MacBook Pro has MS Office for the Mac installed on it and I’m using Entourage for mail. So far Entourage has left me with mostly positive impressions. I like it a lot. The Project Centre inside Entourage makes it easier to implement GTD, and it’s generally nicer to use than Outlook, and in my view it’s even nicer than Thunderbird. I haven’t tried Apple Mail, but my feeling is that those Mac users who don’t have a militant aversion to Microsoft products use Entourage in preference to Mail, iCal etc. I only wish that you could customise some of the properties of the mail folders in Entourage, like telling it to display a count of all the messages in the folder rather than just the unread ones. This is one useful feature that Outlook has which other mail clients don’t seem to support. I’ve been using this feature on my PC at work to help me implement my GTD system, and it works very well.

Oh, and here are my favourite Mac ads:

Actually all the ads are great.

Categories
Random observations

The changes continue

I’m continuing to fiddle with this weblog in whatever spare time I have. Just think, in the cumulative time I’ve spent blogging and stuffing around with this website, I probably could have built a small house. Anyway, I’ve added an “About” page and a “Disclaimer” page, the latter being about as useful as a lawyer is in hell, but it lends me some comfort nevertheless. Additionally, I’ve changed the tagline for the weblog. The “About” page might go some way towards explaining what it means, but I wouldn’t count on it. This weblog may be completely losing the plot; then again, it might just be getting interesting. Such uncertainties are the very nature of The Thin Line…

Categories
Random observations

Unself-censored

I have removed password protection from one of my earlier posts. Enjoy the rant.

Categories
Random observations

Web 2.0 not here yet

Now that this weblog has clearly entered the realm of Web 2.0, I will claim, contradictorily, that Web 2.0 has not actually arrived yet (not outside of computer science laboratories, anyway), and that we are still using the same old Web that always existed.

Web 2.0, its proponents claim, is about linking people while Web 1.0 was about “linking computers and making information available”. The Web always held the promise of connecting people. I contend that the Web has simply matured, and that its current vitality is more a result of cheap storage, cameras and so on which are finally enabling Web-based social networks on a massive scale: everyone’s participating because, finally, everyone can! But this phenomenon isn’t anything worthy of the Web 2.0 title. What we’ve seen is a powerful technology waiting for everything else to catch up before its true potential could be realised. And importantly, it required a generation of users who wanted to be connected in the manner provided by the MySpaces, YouTubes, LinkedIns and so forth, and who understood the benefits of being connected in such a fashion. In short, what we’re seeing is not Web 2.0, but Society 2.0; we’re witnessing the ability of technology, led by the Web, to link people together in new and interesting ways. The Web was just waiting for the right conditions to spring into brilliant life, like a dormant Wattle seed waiting for rain before sprouting and then bursting into full bloom.

Let me say that again: what some are calling Web 2.0 is nothing more than the result of a propitious set of external circumstances rather than a re-engineering or evolution of the Web itself. Cheap storage means cheap hosting. And economies of scale mean that specialist hosting sites can offer extremely cheap or even free space to host a web site or blog, which has the effect of creating hosted communities such as MySpace and Blogspot. Furthermore, digital photography is cheap and ubiquitous. These days it’s difficult to buy a mobile phone without a camera in it. Many mobile phones even offer video capture. It’s a piece of cake to snap photos and shoot videos and upload them to the Web. But nothing has really changed technology-wise in the Web itself. Even things like Ajax are simply a clever way of bringing together existing technologies (JavaScript and XML in the case of Ajax) to do something interesting. Today’s Web is simply reaching somewhere near the potential that it always had.

The true Web 2.0 will arrive when the fundamental technologies underlying the current Web change. Perhaps HTTP will be augmented with a true asynchronous method which enables clients to subscribe to receive events without the need for polling, thereby potentially reducing network traffic and the load placed on Web servers.

Also, the Web is still relatively “dumb”. Semantics will play a bigger role in Web 2.0. This will enable richer kinds of search and allow machines to do what they’re good at, which is information processing and certain kinds of reasoning, tasks at which they already far outperform humans. It will be a long time before machines can process natural language, interpret photos and describe music the way humans can. So, in the meantime, humans will provide structured descriptions of content and services which will allow machines to draw links and relationships between various pieces of information and other content, and infer implicit information from information that is explicitly asserted by someone. For example, imagine air pollution in the city of Brisbane becomes a problem and the city council institutes a law that states each person may own only one vehicle with four or more cylinders. The knowledge base (i.e., the Web) contains the fact that Bill owns two vehicles. In response to a query asking which people own a vehicle with three or fewer cylinders, we can return Bill (assuming he is a law-abiding citizen) even if the Web contains no explicit facts about the number of cylinders each of his vehicles has.

Semantics will also enable more automation for tasks that human users should not have to do. Here’s a really simple example that requires no reasoning, just data structured in some standard way. I’ve created accounts on numerous Web sites. Every time I create an account on some site, it asks me for pretty much the same information. Why can’t I simply give the website a URI which it can then use to find out all the information about me that it needs?
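Going back to the polling point for a moment, the difference between today’s pull-style Web and a push-style alternative can be sketched in a few lines. This is just a toy illustration with made-up names (HTTP itself has no subscribe verb, and `Resource` here is not any real API), not a real protocol:

```python
# Toy contrast between polling (pull) and subscription (push).
# All names are hypothetical; this is not real HTTP machinery.

class Resource:
    def __init__(self):
        self._version = 0
        self._subscribers = []

    def subscribe(self, callback):
        # Push model: the server remembers interested clients
        # and notifies them whenever the resource changes.
        self._subscribers.append(callback)

    def update(self):
        self._version += 1
        for callback in self._subscribers:
            callback(self._version)

def poll(resource, last_seen):
    # Pull model: every check costs a request, whether or not
    # anything changed; most requests return "nothing new".
    if resource._version != last_seen:
        return resource._version
    return None

events = []
r = Resource()
r.subscribe(events.append)
r.update()
r.update()
print(events)       # [1, 2] -- one notification per actual change
print(poll(r, 2))   # None   -- a wasted round trip
```

The point of the sketch: in the pull model the client pays for every check, changed or not, while in the push model the cost scales with the number of actual changes.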
Then, imagine the different kinds of mash-ups you could create by using this web of data and some simple inference tools; new mash-ups could take advantage of explicit knowledge and inferred knowledge of the kind given in the example above. The Web Ontology Language (or OWL) is being pushed as the means for achieving this “smarter” Web (my own feeling is that, while OWL is very powerful, it can be difficult to express things from any given domain of discourse using OWL, which may be a source of friction to its uptake in the future).
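For the curious, the Bill-and-his-vehicles example can be sketched as a tiny hand-rolled inference over a toy fact base. There’s no OWL here, and the facts, names and rule threshold are all made up for illustration; the point is simply that the conclusion falls out of a rule plus explicit facts, with no cylinder counts stored anywhere:

```python
# Toy forward inference: the knowledge base has no cylinder facts,
# yet combined with the (hypothetical) council law we can still
# answer "who owns a vehicle with three or fewer cylinders?".

facts = {
    ("bill", "owns", "car1"),
    ("bill", "owns", "car2"),
}

# The hypothetical law: at most one vehicle with four or more
# cylinders per person.
MAX_BIG_ENGINES_PER_PERSON = 1

def owners_of_small_vehicles(facts):
    """Infer who must own a small-engined vehicle.

    If the law allows at most one big-engined vehicle per person,
    then any law-abiding person owning two or more vehicles must
    own at least one vehicle with three or fewer cylinders.
    """
    counts = {}
    for subject, predicate, _obj in facts:
        if predicate == "owns":
            counts[subject] = counts.get(subject, 0) + 1
    return {person for person, n in counts.items()
            if n > MAX_BIG_ENGINES_PER_PERSON}

print(owners_of_small_vehicles(facts))  # {'bill'}
```

Real Semantic Web reasoners work over far richer logics than this single hard-coded rule, but the flavour is the same: explicit assertions plus background rules yield answers nobody ever wrote down.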

I want to make it clear: I love where the Web is going. I think most of the sites which have been labelled as being Web 2.0 are exciting and innovative, and I hope there’s more to come. My issue is a rather trivial gripe about new names being given to old technology.