
The PACE framework for context-aware computing

A long time ago, in a Cooperative Research Centre far, far away (well, actually, it used to be just across the road from where I’m writing this post, but it sadly met its demise), a small group of researchers worked on a ubiquitous computing project that came to be known as PACE: Pervasive Autonomic Context-aware Environments. This group produced a framework for context-aware computing, which was the subject of many research papers at Pervasive, PerCom, JPMC and elsewhere. For various reasons, the source code for PACE has only just now come out into the open. Yes, you can now download the PACE framework from SourceForge. Unfortunately, there won’t be a lot of support offered along with the code.


Ben on ubicomp: spot on

True:

Often we seem to use the term Ubiquitous Computing to mean “computers everywhere” as if just having the hardware all over the place was a worthwhile end in itself.

But maybe a better meaning is “computing available when you want it in a way that makes sense for where you are and what you’re doing” which is much harder to do than “computers everywhere”.


Augmented reality on your mobile: the next big thing?

It’s been a while in the making, but augmented reality on your mobile is just about here. And by that, I mean that these applications are available for your mobile phone, and it will only be a matter of time before they gain critical mass. So what am I talking about?

In the research space, among others, I can refer you to iCam (2006) and MARA (2006), from researchers at Georgia Tech and Nokia respectively. iCam allows the placement of virtual sticky notes on objects in the physical world through a mobile device. This is neat, since the sticky notes only appear to the people you want to see them. A limitation of iCam is that, while placement of the sticky notes is very accurate, it only works indoors. MARA overlays information about the real world (and even the people in it, if information about objects is being streamed from a central server) in real time.

Then there’s this concept device from petitinvention, which takes the idea a few steps further. The user can see information about buildings and locations overlaid on the video stream from the mobile device’s camera, but the same tool can also be used to select text from a piece of paper (like a newspaper). Essentially, it’s an augmented reality search tool.

In the commercial/start-up realm, a couple of companies have been creating a bit of buzz. First there’s Enkin, which has been developed for the Google Android mobile phone platform. It enables users to tag places and objects on Google Maps, and then see those tags overlaid on the real world as they walk around with the phone. My favourite is Sekai Camera from Tonchidot. I’m not going to explain it. Just watch the video below. But note that even products on the shelves in shops are tagged in the virtual world and overlaid on the real world. And it’s a very social application.

There are probably still all sorts of hurdles to overcome, but what a great presentation.


Another tangible user interface

The GroupLab at the University of Calgary has published a technical report describing Souvenirs, a tangible user interface for sharing digital photos in the home environment. It is very similar in spirit to Bowl, which I’ve blogged about previously. Souvenirs will be formally published in the Proceedings of the 2008 ACM Conference on Designing Interactive Systems.

Souvenir - the tagged rock and the image it is associated with

Image credit: Nunes, M., Greenberg, S. & Neustaedter, C. (2007). Sharing Digital Photographs in the Home through Physical Mementos, Souvenirs, and Keepsakes. Research Report 2007-875-27, Department of Computer Science, University of Calgary, July 2007.


Finding a human need

I’ve been reading over old ubicomp papers in preparation for a new project at NICTA. So it was that I found myself reading “Charting Past, Present, and Future Research in Ubiquitous Computing”, by Gregory Abowd and Elizabeth Mynatt (who, incidentally, should surely be listed among those ubiquitous computing researchers who inspire me – particularly Abowd, whose work I’ve followed since my Honours year in 2000, and whose books were often referenced in the HCI course I took a couple of years before that). One of the most important passages in that paper, to my mind, was tucked away in section 6.1.1, Finding a Human Need (the emphasis is mine):

It is important in doing ubicomp research that a researcher build a compelling story, from the end-user’s perspective, on how any system or infrastructure to be built will be used. The technology must serve a real or perceived human need, because, as Weiser [1993] noted, the whole purpose of ubicomp is to provide applications that serve the humans. The purpose of the compelling story is not simply to provide a demonstration vehicle for research results. It is to provide the basis for evaluating the impact of a system on the everyday life of its intended population. The best situation is to build the compelling story around activities that you are exposed to on a continuous basis. In this way, you can create a living laboratory for your work that continually motivates you to “support the story” and provides constant feedback that leads to better understanding of the use.

Designers of a system are not perfect, and mistakes will be made. Since it is already a difficult challenge to build robust ubicomp systems, you should not pay the price of building a sophisticated infrastructure only to find that it falls far short of addressing the goals set forth in the compelling story. You must do some sort of feasibility study of cutting-edge applications before sinking substantial effort into engineering a robust system that can be scrutinized with deeper evaluation. However, these feasibility evaluations must still be driven from an informed, user-centric perspective—the goal is to determine how a system is being used, what kinds of activities users are engaging in with the system, and whether the overall reactions are positive or negative. Answers to these questions will both inform future design as well as future evaluation plans. It is important to understand how a new system is used by its intended population before performing more quantitative studies on its impact.

It strikes me that too few ubicomp research groups, our own included, heed this seemingly obvious advice. Though we might occasionally attempt to build a story, it is not often compelling, and I’ve read far too many papers that suffer from the same problem (caveat: I specifically exclude Karen’s work from this blunt introspective analysis, because her work is typically very well motivated and compelling, and because she read the paper I’ve just quoted early in her Ph.D. and took note of it). I don’t think it’s a coincidence that the most successful ubiquitous computing researchers have taken this advice to heart. I want to make sure that the new project at NICTA does do these things properly.


Cool projects by Johnny Lee

Johnny Lee from CMU’s HCI Institute has done some pretty cool things with the Wiimote. His Ph.D. project has also yielded some way cool stuff. Here are just a few of the things he’s done on his own and with his colleagues.

Truly inspiring.


Bowl: token-based media for children

Ben delicioused me a link to an interesting paper called “Bowl: token-based media for children”. It describes a media player that is controlled by placing various objects (tokens) into a bowl. The idea was to create a control interface that is easy for children to use and which establishes links between particular physical objects and digital media. Aside from being a really cool means of interacting with a media player, it would have to be one of the neatest uses of RFID that I’ve come across so far.

The bowl (or rather the platform that the bowl sits on) is augmented with an RFID reader, and the various objects are augmented with RFID tags. When an object is placed in the bowl, an associated piece of media plays on the screen. For example, when a Mickey Mouse doll is put into the bowl, a Mickey Mouse cartoon plays. In theory, various combinations of objects might also have meaning: the system might be configured so that if Mickey Mouse and Donald Duck are placed in the bowl, a cartoon featuring both characters starts playing. The system becomes very social and conversational when homemade objects are augmented with RFID and linked to, say, home video clips or family photos, as demonstrated by the experiment reported in the paper.
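
To make the interaction concrete, here is a minimal sketch, in Python, of the kind of tag-to-media lookup the bowl’s controller would need to perform. The tag IDs, file names and combination rule are invented for illustration; they are not taken from the paper.

    # Hypothetical mapping from RFID tag IDs to media clips (all names invented).
    MEDIA_FOR_TAG = {
        "tag:mickey": "clips/mickey_cartoon.mp4",
        "tag:donald": "clips/donald_cartoon.mp4",
        "tag:holiday-shell": "clips/beach_holiday.mp4",
    }

    # Optional mappings for particular combinations of tokens placed together.
    MEDIA_FOR_COMBINATION = {
        frozenset({"tag:mickey", "tag:donald"}): "clips/mickey_and_donald.mp4",
    }

    def media_for_bowl(tags_in_bowl):
        """Return the clip to play for the current set of tags, or None."""
        # A recognised combination takes precedence over any single token.
        combined = MEDIA_FOR_COMBINATION.get(frozenset(tags_in_bowl))
        if combined:
            return combined
        for tag in tags_in_bowl:
            if tag in MEDIA_FOR_TAG:
                return MEDIA_FOR_TAG[tag]
        return None

    # e.g. the reader reports {"tag:mickey"} -> "clips/mickey_cartoon.mp4"

A real implementation would be driven by events from the RFID reader rather than a one-off lookup, but the mapping from tokens (and combinations of tokens) to media is the essence of the interaction.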

I wonder what sorts of casual, natural interactions such as those induced by Bowl might make sense in the domain I’m working in? What are the relevant artefacts that could be augmented to create new meanings for the people who interact with them?


Android – the open platform for mobile apps

So Android has been released. As I suspected, Google has not actually released a phone of their own. Could be an interesting platform for researchers in the mobile/ubiquitous computing space who want to develop prototypes quickly. One of the creators of the platform hopes that someone develops an application that can help interpret his wife’s thoughts…


Meeting of the minds

Enough politics. Back to a more wholesome topic…

Here’s a photo of a dinner we held for Anind Dey at the Brasserie on the River a couple of weeks ago. The photo contains two of my previously mentioned ubiquitous computing inspirators.

Dinner for Anind

Clockwise from the top right we have Jaga Indulska, Anind Dey, Karen Henricksen (Robinson), Ricky’s camera case, Pei Hu, Ryan Wishart, Myilone Anandarajah, Andry Rakotonirainy and Bob Hardian. The food was great and the conversation stimulating. A good night was had by all.


Ubiquitous Computing: People who inspire me

A few weeks ago, I discovered that IEEE Distributed Systems Online maintains a list of the key people in the field of mobile and pervasive computing. Here’s a much shorter list of people in pervasive computing whose work has inspired me. The list might be biased towards the sub-areas of ubiquitous computing with which I am more familiar, and in all cases, I acknowledge the involvement of Ph.D. supervisors and colleagues without explicitly mentioning them.

Mark Weiser

Often called the father of pervasive computing, he wrote the seminal paper on the topic (I know some people have their own views about this, but history will always see it this way).

Most important, ubiquitous computers will help overcome the problem of information overload. There is more information available at our fingertips during a walk in the woods than in any computer system, yet people find a walk among trees relaxing and computers frustrating. Machines that fit the human environment, instead of forcing humans to enter theirs, will make using a computer as refreshing as taking a walk in the woods. (The Computer for the 21st Century, 1991)

Anind Dey

Dey provided the first useful (i.e., operational) definition of context in this field, and one of the first non-monolithic approaches to developing context-aware applications by way of the Context Toolkit (Schilit was perhaps the pioneer in that respect).

Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves. (Understanding and Using Context, 2001)
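
As a toy illustration of that definition (my own, not anything from the Context Toolkit), context can be thought of as a bag of attributes attached to the entities involved in an interaction, which an application consults in order to adapt its behaviour:

    from dataclasses import dataclass, field

    @dataclass
    class Entity:
        """A person, place or object relevant to the interaction."""
        kind: str                                    # "person", "place" or "object"
        name: str
        context: dict = field(default_factory=dict)  # e.g. location, activity, time

    alice = Entity("person", "Alice",
                   {"location": "Room 401", "activity": "in a meeting"})

    def should_silence_phone(user: Entity) -> bool:
        # A context-aware application adapts its behaviour to the user's situation.
        return user.context.get("activity") == "in a meeting"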

Karen Henricksen

While Dey provided the often-quoted definition of what context is, Henricksen filled in the details about the nature of context information in ubiquitous computing environments, and made one of the first real attempts to formally model it. Henricksen, in conjunction with her colleagues, also developed one of the most sophisticated approaches to engineering context-aware applications, beginning with modelling and ending with a set of programming abstractions. Henricksen and Indulska authored the Elsevier Journal of Pervasive and Mobile Computing’s most downloaded article of the year from May 2006 to April 2007.

[Our] system will allow abstract models described in our notation to be mapped with little effort to corresponding implementation models that can be populated with context information and queried by applications. It will be responsible for a range of management tasks, such as integration of context information from a variety of sources, management of sensors and derived context, detection of conflicting information and so on. (Modeling context information in pervasive computing systems, 2002)

Guanling Chen

Chen and Kotz developed a novel platform, called Solar, for building context-aware applications. I found their work particularly inspiring for what I would call its bottom-up approach. What excited me about their idea is the same thing that excited me about the DSTC’s Elvin protocol: the ability to quickly build an application by mashing up various sources of information.

A fundamental challenge in pervasive computing, then, is to collect raw data from thousands of diverse sensors, process the data into context information, and disseminate the information to hundreds of diverse applications running on thousands of devices, while scaling to large numbers of sources, applications, and users, securing context information from unauthorized uses, and respecting individuals’ privacy. (Solar: A pervasive-computing infrastructure for context-aware mobile applications, 2002)
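
To give a flavour of that bottom-up, mash-up style, here is a toy sketch of my own (not Solar’s actual API) in which an application is composed by wiring a raw event source through an operator that derives higher-level context:

    def temperature_source():
        # Stand-in for a raw sensor publishing a stream of readings.
        yield from [18.5, 19.0, 24.2, 26.1]

    def threshold_filter(readings, limit):
        # An operator that turns raw readings into higher-level context events.
        for value in readings:
            if value > limit:
                yield ("too_warm", value)

    def notify(events):
        # A simple application subscribing to the derived context stream.
        for name, value in events:
            print(f"{name}: {value} degrees")

    notify(threshold_filter(temperature_source(), limit=25))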

The Cambridge Contingent

Andy Hopper, Andy Harter, Roy Want and others gave the world Active Badges, which were initially used to divert incoming phone calls to the phone nearest the user. Active Badges soon gained a following in ubiquitous computing research centres around the world, with installations at MIT, Xerox PARC, EuroPARC and elsewhere. These researchers also showed remarkable awareness of the social impact their technology could have in the world. The honesty and openness with which they wrote their papers is something that ought to be replicated in more of the papers of the current generation. I’m sure this project has inspired many a ubiquitous computing researcher.

The most important result of this work is not, “Can we build a location system?”, but, “Do we want to be a part of a location system?” There is a danger that in the future this technology will be abused by unscrupulous employers. (The Active Badge Location System, 1992)

The Lancaster League

Nigel Davies, Adrian Friday, Gordon Blair, Keith Cheverst and maybe a few others have made a large contribution to the field. I remember reading their work – about mobility, adaptation, service discovery and more – around the year 2000 and thinking it was fantastic. Their papers often disclosed important findings.

Interaction with a context-aware/location-aware system is not affected by the design of the user interface alone. In fact, interaction with GUIDE is, to a large extent, governed by the design of the infrastructure, i.e. the strategic placement of cells in order to provide appropriate areas of location resolution and network connectivity. (Developing a context-aware electronic tourist guide: some issues and experiences, 2000)

Jack Schulze and Matt Webb

Although these guys aren’t strictly ubiquitous computing researchers, I find their work inspiring on a number of levels.

Tangible interactions can be more immediately familiar than ones we regularly use with our computers. (The Hills are Alive with the Sound of Interaction Design, 2007)

So, that’s my list. It’s short and sweet. As I said at the beginning of this year, I’d like to move my work more towards the HCI side of things, which means that if I were to rewrite this list in a year’s time, it might feature a different bunch of people (like Paul Dourish, perhaps).