Johnny Lee from CMU’s HCI Institute has done some pretty cool things with the Wiimote. His Ph.D. project has also yielded some way cool stuff. Here are just a few of the things he’s done on his own and with his colleagues.
Truly inspiring.
Ben delicioused me a link to an interesting paper called “Bowl: token-based media for children”. It describes a media player that is controlled by placing various objects (tokens) into a bowl. The idea was to create a control interface that is easy for children to use and which establishes links between particular physical objects and digital media. Aside from being a really cool means for interacting with a media player, it would have to be one of the neatest uses of RFID that I’ve come across so far. The bowl (or rather the platform that the bowl sits on) is augmented with an RFID reader. The various objects are augmented with RFID tags. When an object is placed in the bowl, an associated piece of media plays on the screen. For example, when a Mickey Mouse doll is put into the bowl, a Mickey Mouse cartoon plays. In theory, various combinations of objects might also have meaning. The system might be configured so that if Mickey Mouse and Donald Duck are placed in the bowl, a cartoon featuring both these characters starts playing. The system becomes very social and conversational when homemade objects are augmented with RFID and linked to, say, home video clips or family photos, as demonstrated by the experiment reported in the paper.
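Out of curiosity, here’s how the core of such a token-to-media mapping might look. This is just a hypothetical sketch in Python; the tag IDs, file names, reader interface and combination rule are all my own inventions, and the actual Bowl prototype may well resolve combinations quite differently.

```python
# Hypothetical sketch of a Bowl-style token-to-media mapping.
# Tag IDs and media file names are invented for illustration.

MEDIA_FOR_TOKENS = {
    frozenset({"tag-mickey"}): "mickey_cartoon.mp4",
    frozenset({"tag-donald"}): "donald_cartoon.mp4",
    # A combination of tokens can map to its own piece of media.
    frozenset({"tag-mickey", "tag-donald"}): "mickey_and_donald.mp4",
}

def media_for(tags_in_bowl):
    """Return the media clip for the current set of tags, if any.

    Prefers the most specific match, i.e. the largest combination
    of tokens in the bowl that has an associated clip.
    """
    candidates = [
        (tokens, clip)
        for tokens, clip in MEDIA_FOR_TOKENS.items()
        if tokens <= tags_in_bowl
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda pair: len(pair[0]))[1]

print(media_for({"tag-mickey"}))                # mickey_cartoon.mp4
print(media_for({"tag-mickey", "tag-donald"}))  # mickey_and_donald.mp4
```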
I wonder what sorts of casual, natural interactions such as those induced by Bowl might make sense in the domain I’m working in? What are the relevant artefacts that could be augmented to create new meanings for the people who interact with them?
Last week I read a 2004 paper called MapReduce: Simplified Data Processing on Large Clusters. It was written by a couple of Google researchers and details a simple programming model and library for processing large datasets in parallel. MapReduce is used by Google under the hood for lots of different things, from indexing to machine learning to graph computation. Very handy indeed.
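To give a flavour of the model, here’s the canonical word-count example sketched as plain, single-process Python. It mimics the map/shuffle/reduce structure described in the paper, but it is obviously not Google’s implementation, which shards both phases across thousands of machines and handles failures transparently.

```python
# Toy illustration of the map/reduce programming model: word count.
# Single-process Python, not Google's distributed C++ library.
from collections import defaultdict

def map_phase(doc_id, text):
    # Emit an intermediate (key, value) pair for every word.
    for word in text.split():
        yield word, 1

def reduce_phase(word, counts):
    # Combine all intermediate values that share a key.
    yield word, sum(counts)

def mapreduce(documents):
    intermediate = defaultdict(list)
    for doc_id, text in documents.items():
        for key, value in map_phase(doc_id, text):
            intermediate[key].append(value)   # the "shuffle" step
    results = {}
    for key, values in intermediate.items():
        for out_key, out_value in reduce_phase(key, values):
            results[out_key] = out_value
    return results

docs = {"d1": "the cat sat", "d2": "the cat ran"}
print(mapreduce(docs))  # {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}
```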
So imagine my surprise to find in last Friday’s edition of ACM TechNews that this paper has been republished in Communications of the ACM this month, albeit in a slightly shorter form. Aside from a few cosmetic changes (updated figure and table), the content of the papers is the same. That is, you don’t gain any knowledge from reading one of the papers that you wouldn’t gain from reading the other. There is no indication in the more recent publication that so much content has been duplicated from an earlier paper, though there is a citation to the older paper. In short, this is not new material, having been first published more than three years ago. Communications of the ACM seems to be trialling a new model, whereby the best articles from conferences are modified and republished for the ACM audience. But seriously, the modifications in the republished MapReduce article are negligible. What gives?
So Android has been released. As I suspected, Google has not actually released a phone of their own. Could be an interesting platform for researchers in the mobile/ubiquitous computing space who want to develop prototypes quickly. One of the creators of the platform hopes that someone develops an application that can help interpret his wife’s thoughts…
I’ve seen this video – about digital information and its categorisation – linked on various websites over the last week or so. I thought I’d share it here as well. Very nice.
Enough politics. Back to a more wholesome topic…
Here’s a photo of a dinner we held for Anind Dey at the Brasserie on the River a couple of weeks ago. The photo contains two of my previously mentioned ubiquitous computing inspirators.
Clockwise from the top right we have Jaga Indulska, Anind Dey, Karen Henricksen (Robinson), Ricky’s camera case, Pei Hu, Ryan Wishart, Myilone Anandarajah, Andry Rakotonirainy and Bob Hardian. The food was great and the conversation stimulating. A good night was had by all.
A few weeks ago, I discovered that IEEE Distributed Systems Online maintains a list of the key people in the field of mobile and pervasive computing. Here’s a much shorter list of people in pervasive computing whose work has inspired me. The list might be biased towards the sub-areas of ubiquitous computing with which I am more familiar, and in all cases, I acknowledge the involvement of Ph.D. supervisors and colleagues without explicitly mentioning them.
Often called the father of pervasive computing, Mark Weiser wrote the seminal paper on the topic (I know some people have their own views about this, but history will always see it this way).
Most important, ubiquitous computers will help overcome the problem of information overload. There is more information available at our fingertips during a walk in the woods than in any computer system, yet people find a walk among trees relaxing and computers frustrating. Machines that fit the human environment, instead of forcing humans to enter theirs, will make using a computer as refreshing as taking a walk in the woods. (The Computer for the 21st Century, 1991)
Dey provided the first useful (i.e., operational) definition of context in this field, and one of the first non-monolithic approaches to developing context-aware applications by way of the Context Toolkit (Schilit was perhaps the pioneer in that respect).
Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves. (Understanding and Using Context, 2001)
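To illustrate the non-monolithic idea, here’s a hypothetical sketch of the widget pattern the Context Toolkit popularised: a widget wraps a sensor and fans context changes out to subscribing applications, so no application ever touches the raw sensor. The class and method names below are mine, not the toolkit’s actual API.

```python
# Hypothetical sketch of a context widget, in the spirit of the
# Context Toolkit. Names are invented, not the toolkit's real API.

class LocationWidget:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        # Applications register interest instead of polling sensors.
        self._subscribers.append(callback)

    def sensor_update(self, person, room):
        # Called by the underlying sensor driver; fans the new
        # context out to every interested application.
        for callback in self._subscribers:
            callback(person, room)

def forward_calls(person, room):
    print(f"Routing {person}'s calls to the phone in {room}")

widget = LocationWidget()
widget.subscribe(forward_calls)
widget.sensor_update("alice", "room 101")
```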
While Dey provided the often-quoted definition of what context is, Henricksen filled in the details about the nature of context information in ubiquitous computing environments, and made one of the first real attempts to formally model it. Henricksen, in conjunction with her colleagues, also developed one of the most sophisticated approaches to engineering context-aware applications, beginning with modelling and ending with a set of programming abstractions. Henricksen and Indulska authored the Elsevier Journal of Pervasive and Mobile Computing’s most downloaded article of the year from May 2006 to April 2007.
[Our] system will allow abstract models described in our notation to be mapped with little effort to corresponding implementation models that can be populated with context information and queried by applications. It will be responsible for a range of management tasks, such as integration of context information from a variety of sources, management of sensors and derived context, detection of conflicting information and so on. (Modeling context information in pervasive computing systems, 2002)
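As a rough illustration of what “populated with context information and queried by applications” might mean in practice, here is a deliberately tiny sketch. The fact representation and the situation predicate are my own inventions; Henricksen’s actual modelling approach (CML) is far more expressive, handling quality, dependencies and conflicting information.

```python
# Hypothetical sketch: an application queries a populated context
# model rather than the sensors themselves. The representation here
# is invented and vastly simpler than Henricksen's CML.

facts = {
    ("alice", "locatedAt"): {"room 101"},
    ("alice", "engagedIn"): {"meeting"},
    ("room 101", "contains"): {"projector"},
}

def holds(subject, relation, value):
    """Check whether a fact is asserted in the context model."""
    return value in facts.get((subject, relation), set())

def in_meeting_at(person, room):
    """A situation defined as a predicate over the fact base."""
    return (holds(person, "locatedAt", room)
            and holds(person, "engagedIn", "meeting"))

print(in_meeting_at("alice", "room 101"))  # True
```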
Chen and Kotz developed a novel platform, called Solar, for building context-aware applications. I found it particularly inspiring for what I would call its bottom-up approach. What excited me about their idea is the same thing that excited me about the DSTC’s Elvin protocol: the ability to quickly build an application by mashing up various sources of information.
A fundamental challenge in pervasive computing, then, is to collect raw data from thousands of diverse sensors, process the data into context information, and disseminate the information to hundreds of diverse applications running on thousands of devices, while scaling to large numbers of sources, applications, and users, securing context information from unauthorized uses, and respecting individuals’ privacy. (Solar: A pervasive-computing infrastructure for context-aware mobile applications, 2002)
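In that spirit, here’s a toy sketch of the data-flow style that made Solar (and Elvin-based mashups) so appealing to me: raw event sources composed through small operators into derived context. All names here are invented for illustration; Solar’s real operator graph and subscription model are much richer.

```python
# Hypothetical sketch of source composition, loosely in the spirit
# of Solar's operator graphs. All names are invented.

class Node:
    """Applies a function to each event and forwards non-None
    results to its subscribers."""

    def __init__(self, operator=lambda e: e):
        self._operator = operator
        self._subscribers = []

    def pipe(self, operator):
        # Derive a new event stream by attaching an operator.
        node = Node(operator)
        self._subscribers.append(node.publish)
        return node

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, event):
        result = self._operator(event)
        if result is None:
            return  # operator filtered this event out
        for deliver in self._subscribers:
            deliver(result)

# Derive "meeting in progress" context from a raw badge-sighting feed.
badge_sightings = Node()
meetings = badge_sightings.pipe(
    lambda e: e if e["room"] == "conference" else None
)
meetings.subscribe(lambda e: print(f"{e['person']} is in a meeting"))

badge_sightings.publish({"person": "bob", "room": "conference"})
badge_sightings.publish({"person": "carol", "room": "office"})
```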
The two Andys (Hopper and Harter), Roy Want and others gave the world Active Badges, which were initially used to divert incoming phone calls to the phone nearest the user. Active Badges soon gained a following in ubiquitous computing research centres around the world, with installations at MIT, Xerox PARC, EuroPARC and elsewhere. These researchers also showed remarkable awareness of the social impact their technology could have in the world. The honesty and openness with which they wrote their papers is something that ought to be replicated in more papers of the current generation. I’m sure this project has inspired many a ubiquitous computing researcher.
The most important result of this work is not, “Can we build a location system?”, but, “Do we want to be a part of a location system?” There is a danger that in the future this technology will be abused by unscrupulous employers. (The Active Badge Location System, 1992)
Nigel Davies, Adrian Friday, Gordon Blair, Keith Cheverst and maybe a few others have made a large contribution to the field. I remember reading their stuff – about mobility, adaptation, service discovery and more – around the year 2000 and thinking it was fantastic. Their papers often candidly disclosed lessons learned from real deployments, like this one from the GUIDE project:
Interaction with a context-aware/location-aware system is not affected by the design of the user interface alone. In fact, interaction with GUIDE is, to a large extent, governed by the design of the infrastructure, i.e. the strategic placement of cells in order to provide appropriate areas of location resolution and network connectivity. (Developing a context-aware electronic tourist guide: some issues and experiences, 2000)
Although these guys aren’t strictly ubiquitous computing researchers, I find their work inspiring on a number of levels.
Tangible interactions can be more immediately familiar than ones we regularly use with our computers. (The Hills are Alive with the Sound of Interaction Design, 2007)
So, that’s my list. It’s short and sweet. As I said at the beginning of this year, I’d like to move my work more towards the HCI side of things, which means that if I were to rewrite this list in a year’s time, it might feature a different bunch of people (like Paul Dourish, perhaps).
If I were to enter your address and other personal details into an online application like Plaxo (an address book/calendar), and those details were leaked (or sold, for that matter – not that Plaxo would do that), how pissed would you be? Would you forgive me for storing your details in some third party database? If somebody used those leaked details to impersonate you, and they were caught, would I be liable for having entered your details into my online address book in the first place without getting your permission? I wouldn’t think twice about putting someone’s details into an electronic address book that resides on my computer or using an old-fashioned paper-based address book. But an online address book service could potentially store millions of address book entries put there by thousands of users, and it therefore becomes an attractive and worthwhile target for criminals.
The following video gives a taste of what ubiquitous computing researchers around the world are working towards. This video has particular relevance for the SAFE project, because it deals with an emergency scenario. It’s a professionally made video, and very interesting to watch. One hopes RUNES didn’t blow their whole budget on this! (To be fair, they’ve produced a bunch of code that can be downloaded from their web site.)
I don’t have a problem with someone filming another person on their mobile phone in a public area where it’s obvious that many people are watching, so why do I have a problem with surveillance cameras in workplaces and some other locations? Isn’t that inconsistent? What’s the difference? I think the answer is that the first scenario does not break social protocols. If you’re speeding down the highway, you obviously know that there are going to be tens, if not hundreds, of other people watching you. That somebody might capture your flouting of the law on camera does not change the social protocol. That is, you know you are being watched by human eyes. The same goes for taking recordings during meetings. There are other people in the meeting and they’re going to hear what you say anyway.

In the second scenario, the camera is always turned on, even when there are no other people around. This is the important difference. It is easy to forget that your actions might be caught on camera, even if the camera is in plain sight and there are notices everywhere warning that there are surveillance cameras in use. So a woman might adjust her bra, or a man might pick his nose, not realising in the instant that it’s all being caught on camera. You might begin to sing, forgetting that it’s being recorded.

Despite technology, we are still very social animals. That means we think in terms of the other people who are physically around us, overlooking the fact that technology enables others to be present in spite of their separation in space and time.
At the QRL labs of NICTA, surveillance cameras have been installed for research purposes in the hallways. I’ve grown kind of used to them. But I do completely forget about them until I get up from my desk and see them hanging out of the ceiling in the hallway. If one were installed in my office where it could record me at my desk, I think it would similarly escape my attention until I turned around and looked at it. I wonder what kind of unconscious embarrassing behaviours the camera would record if it were trained on my desk?