Privacy in social software

If I were to enter your address and other personal details into an online application like Plaxo (an address book/calendar), and those details were leaked (or sold, for that matter – not that Plaxo would do that), how pissed would you be? Would you forgive me for storing your details in some third-party database? If somebody used those leaked details to impersonate you, and they were caught, would I be liable for having entered your details into my online address book in the first place without getting your permission? I wouldn’t think twice about putting someone’s details into an electronic address book that resides on my computer, or into an old-fashioned paper-based address book. But an online address book service could potentially store millions of address book entries put there by thousands of users, and it therefore becomes an attractive and worthwhile target for criminals.

RUNES project: Fire in Tunnel scenario

The following video gives a taste of what ubiquitous computing researchers around the world are working towards. This video has particular relevance for the SAFE project, because it deals with an emergency scenario. It’s a professionally made video, and very interesting to watch. One hopes RUNES didn’t blow their whole budget on this! (To be fair, they’ve produced a bunch of code that can be downloaded from their web site.)

On Surveillance

I don’t have a problem with someone filming another person on their mobile phone in a public area where it’s obvious that many people are watching, so why do I have a problem with surveillance cameras in workplaces and some other locations? Isn’t that inconsistent? What’s the difference? I think the answer is that the first scenario does not break social protocols. If you’re speeding down the highway, you obviously know that there are going to be tens, if not hundreds, of other people watching you. That somebody might capture your flouting of the law on camera does not change the social protocol. That is, you know you are being watched by human eyes. The same goes for making recordings during meetings: there are other people in the meeting, and they’re going to hear what you say anyway.

In the second scenario, the camera is always turned on, even when there are no other people around. This is the important difference. It is easy to forget that your actions might be caught on camera, even if the camera is in plain sight and there are notices everywhere warning that surveillance cameras are in use. So a woman might adjust her bra, or a man might pick his nose, not realising in the instant that it’s all being caught on camera. You might begin to sing, forgetting that it’s being recorded. Despite technology, we are still very social animals. That means we think in terms of the other people who are physically around us, overlooking the fact that technology enables others to be present in spite of their separation in space and time.

At the QRL labs of NICTA, surveillance cameras have been installed for research purposes in the hallways. I’ve grown kind of used to them, which is to say I completely forget about them until I get up from my desk and see them hanging from the ceiling in the hallway. If one were installed in my office where it could record me at my desk, I think it would similarly escape my attention until I turned around and looked at it. I wonder what kind of unconscious embarrassing behaviours the camera would record if it were trained on my desk?

PerCom 2007: Trip Report

I’ve posted a trip report for PerCom 2007 over on RickyRobinson.id.au. Some of you might be interested.

Ricky, harping on about Web 2.0… again

After I wrote a post about Web 2.0 not being here yet, Ben wrote a piece on the subject, arguing that the cool new things that are happening on the Web these days warrant a version increment. I left a comment on that post in which, as unlikely as it sounds, I drew an analogy between the Web and turkey basters. Now I think I have a much better metaphor for the evolution of the Web than product version numbers.

Historians, both the professional and casual kinds, refer to the various stages of human history with names such as “Stone Age”, “Bronze Age”, “Industrial Age” and so on. These names reflect the progress of humankind as it evolved from using crude stone implements to refined materials to machinery. The space in which humankind has always lived, Earth, provides the raw resources, which, as time passed, humans learned to refine and compose to make more useful things. But historians do not feel compelled to refer to Earth by a new name (Earth 2.0) simply because one of its inhabitant species got a bit clever and started manipulating the raw materials available to it in new and interesting ways.

The Web is an abstract space analogous to the Earth. It contains resources – protocols, scripting languages, hypertext and so forth – which may be refined and composed to create novel things. The first stage of human existence within the Web was the “Hypertext Age”. URLs, hypertext and the accompanying Hypertext Transfer Protocol (HTTP) were the raw materials of the Web in its infancy. These raw materials were provided by the almighty Creator (Tim Berners-Lee). The next stage of the Web was the “Dynamic Age”, in which CGI, servlets and so on injected a modicum of motion into the creations of the Web’s inhabitants, just as the invention of the steam engine triggered a comprehensive mechanisation of industry. We now find ourselves squarely in the “Social Age” of the Web, in which humankind has fashioned some extraordinarily powerful tools within the Web and the physical world to fulfil its innate desire to socialise. The “Social Age” was brought about by the serendipitous co-occurrence of a generation of people who desire social connectivity on a new level, a range of technologies such as camera phones, iPods and cheap digital storage, and clever ways of putting together existing Web technologies (think Ajax). What circumstances will arise to give humankind the push into the next age of the Web? What will that age look like? What experiences will be had during it?

I won’t be holding my breath waiting for the world to take up this metaphor for the history of humankind’s existence in the Web. The “Social Age” of the Web is not as snappy and catchy as “Web 2.0”. But maybe we’ll tire of incrementing version numbers by Web 11.0!

Schulze & Webb: Awesome

Before I go any further with this post, I want to thank Ben for imploring the readers of his blog to check out this presentation from some guys called Schulze & Webb. These days, you get pointers to so much stuff out there on the web, a lot of it interesting, but a lot of it only so-so. Then, occasionally, you’ll come across a gem that truly was worth reading, and the presentation by Schulze & Webb, for me at least, is one of those gems. A word of advice if you do decide to read it, though: read it right through, as there’s a lot of good stuff in it.

I can relate to the presentation, titled The Hills are Alive with the Sound of Interaction Design, and its authors, Schulze & Webb, on a number of different levels. For starters, they use the example of football, specifically that magical goal Argentina scored against Serbia and Montenegro in the 2006 World Cup, to illustrate the concept that the means or the experience is more important to most people than the end result. In scoring that beautiful goal, Argentina strung together 24 passes before Cambiasso struck the ball into the back of the net. Football fans all over the world appreciate that goal because of the lead up to it, not the goal itself. This is also one of the reasons why football lovers can tolerate nil-all draws, and indeed why it can be truthfully said that some nil-all draws are more enjoyable than games in which five goals are scored: there’s so much more to the game than the goals. But football is only the most obvious example. The same can be said of other sports from cricket (the innings-long battle between batsman and bowler, rather than the fall of the wicket) to tennis (the rallies, rather than the rally-ending shot). Anyway, using football to illustrate a neat concept is a sure way to get me on side!

Their presentation also resonates with my recently written “About” page. They both speak of thresholds, boundaries and tipping points. They both talk about figuring out how to develop new things that harmonise with human experience and the human cognitive model (I love their bumptunes hack for the PowerBook; I wonder if the MacBook Pro has an accelerometer?).

Several months before submitting my Ph.D. thesis, I made the decision that I wanted to refocus my subsequent pervasive computing research more towards the user, or at the very least, to ensure that if I was going to be developing middleware to support pervasive computing applications, I would lobby hard to have some time and resources set aside to build decent, cool applications to exercise that middleware. It turns out I didn’t have to lobby that hard! But the point I’m trying to make here is that the Schulze & Webb presentation has provided a timely reminder of why I made that decision to think more about the user in the work that I do: it’s because in the research space I work in, that’s where the rubber hits the road. You can build middleware, context management systems and so forth, but in the end, it’s all in support of what the user wants to do, and it’s a fun challenge figuring out neat applications that people actually want to use because they’re a joy to use.

The challenge in my particular line of work is this: how do you create applications for emergency and disaster prediction, response and recovery that are “fun” to use? How do you design an application for the emergency services sector that creates an experience as pleasurable as watching Argentina’s second goal against Serbia and Montenegro in the World Cup? Is it even appropriate to create fun applications for an industry that, by definition, regularly deals with human tragedy? I hope the answer to the third question is a resounding “yes” if the applications help to save more lives than would otherwise be the case. Perhaps the word I’m looking for isn’t “fun” but “rewarding”. An application that makes its user feel rewarded for using it is a successful application because, presumably, the user will want to continue using it. An emergency worker feels rewarded if they are saving lives and making snap decisions that turn out to be good ones. Therefore, I think a good reformulation of my goal while I remain part of the SAFE project at NICTA is this: to develop rewarding applications (and supporting infrastructure) for the emergency services sector. This isn’t far off my official job description, but it brings into sharp focus the importance of considering users’ experiences as they interact with the application and system.

Thank you Ben. Thank you Schulze & Webb.

Against academic hermits

In writing about the misconceptions of collaboration, I hadn’t expected that anyone would interpret my article as arguing against any kind of grouping of researchers, but it’s come to my attention that at least one reader has interpreted it that way, and a re-reading of my post tells me that’s a fair enough interpretation of what was written. I’m happy to take the blame for this, as my writing can be kind of loose at times. On the other hand, I’ve received some e-mails indicating that other readers knew exactly what I was trying to get at, as they’d experienced some of the things I was talking about.

Nevertheless, I’d like to clarify that I’m all for research teams! I hadn’t considered that research teams of the sort one might find within a single research organisation could be classified as a collaborative arrangement. To me, such a grouping is something more than a collaboration. Rather naively, I had not even thought about the possibility that academics can still get away with working in complete isolation, so I was somewhat stupidly relying on the reader to make some assumptions about what was written. When I spoke of loosely coupled interactions, I meant interactions between two or more groups of academics at different institutions, not between individual academics. When I said “History is littered with hundreds of examples where this is the case, and very few in which close collaboration between teams of researchers yielded a scientific breakthrough”, I was talking about interactions between multiple teams rather than intra-team dynamics. What I’m arguing against are top-down directives that mandate collaboration between research teams from different organisations with little or no forethought. There’s pressure to form these kinds of collaborations because they attract funding either directly, or indirectly in that they are used as a kind of metric that can be counted to attract future funding.

It has also occurred to me that if the article is read in a certain way, it could be seen as demeaning Australian CRCs (Cooperative Research Centres). While I do believe it is getting harder for researchers who work in CRCs (and other government-funded institutions) to focus on fundamental research, I certainly am not asking for the end of CRCs! Good grief! A certain CRC with which I have had the privilege to be associated will probably always be one of the best, coolest, most awesome places I will ever have worked, and that’s precisely because of the people who worked there and the creative buzz one felt when interacting with those great people.

I think Kerry pinpointed exactly the point I was trying to make:

Yes, but would not a formal collaboration of the correct range of skills in the one lab with a common goal have found it sooner?

The key point is “the one lab”. This is exactly right. But again, as soon as a cohesive unit is formed, to me it ceases to be what I think of as collaboration, and I think this is where Kerry and I were getting our wires crossed. When a research team is formed within a lab, the unit is no longer the individual researcher: it’s the research team. When collaboration does happen in this context, it is not between individual researchers but between entire research teams. And the fact that the researchers have gathered in the one lab indicates that they’ve all been drawn there of their own accord. It’s a bottom-up process, not a command-and-control one. Furthermore, collaboration between one research team and another will happen of its own accord where the majority of individuals involved see the benefit. There’s no need for other (politically motivated) incentives.

For what it’s worth, I personally can’t even imagine being some kind of academic hermit working in isolation. I need to be part of a team so I can feed off the vibes and ideas of my co-workers. Hell, I’ve never even written a paper on my own (barring my thesis)! I hope that takes care of any misunderstandings. But knowing my luck, I’ve just dug a deeper hole! Perhaps you’ll be better off reading Kerry’s concise clarification.

My two bob’s worth on the Don Norman simplicity debate

Don Norman, respected usability guru, wrote an article on the demise of simplicity as a selling point, and it has caused reverberations all around the world. In fact, his article has been so controversial that he has found it necessary to write a clarifying addendum (added to the bottom of the article), fearing that many of his readers interpreted the essay as concluding that simplicity should no longer be a design goal. Norman’s point is that a product with a greater number of features is, at the point of sale, more appealing than a similar product with fewer features. The “more complicated” product is therefore more likely to sell. In other words, feature creep is driven by the knowledge that consumers will be suckered into paying for a product that looks more complicated, even though, in many cases, they might complain about the difficulty of using the product when they get home.

I think there’s a difference between giving a user too many choices and giving them too many features. Confusion and frustration arise when the user is presented with an array of subtly different choices. Joel Spolsky provides an excellent example: the Windows Vista shutdown menu, which gives the user umpteen slightly different ways of shutting down the computer. Why? On the other hand, providing lots of features that do different things need not frustrate the user, because, well, the features accomplish distinct tasks, and the user can clearly separate them in their mind. Take those Japanese toilets, for instance. They have an integrated bidet, dryer, seat warmer, massage options, automatic flushing and so on. The existence of these features does not mean that the toilet isn’t simple to use, per se. If, however, each of those features had a confusing list of subtly different settings, then that could be a problem!

Norman’s essay would have been easier to read, and would have caused less confusion, had it been written more clearly and carefully. The following is just the most confusing of a number of errors that can be found in his article:

Notice the question: “pay more money for a washing machine with less controls.” An early reviewer of this paper flagged the sentence as an error: “Didn’t you mean ‘more money’?” the reviewer asked?

But it already says “more money”. Somehow Norman and his reviewer have conspired to introduce an error that is similar to the one they were seeking to avoid. If that’s not irony, I don’t know what is.