Ben strikes back in the UbiComp saga by asking
Do I want to live in the dystopian future of, to take a bad example, Minority Report, where I am addressed by name by advertisements for bland Japanese luxury cars (OK, that’s probably not the actual example from Minority Report)? Does my backpack need to acquire new functions automagically? Does every can of baked beans need to contain a record of its entire production process?
The answer is probably no to all of the above. But none of the above is a requirement for UbiComp. UbiComp might make these sorts of things possible, but they are certainly not a necessary indicator that UbiComp has arrived.
The example Ben uses from Adam Greenfield about self-moulding beds (my bed moulds to me just nicely already) simply shows that no matter how contrived a UbiComp scenario one dreams up, it’s just as easy to invent an equally or more contrived scenario in which the technology fails.
The point of the self-moulding bed example, I suppose, is that things break. Of course they do. When there’s a blackout, my computer crashes. The blue screen of death is still a regular occurrence if you use a certain operating system. Things always break for one reason or another. Yet I’m not about to argue that traditional computing technology has not arrived.
The story about Prada from Fred’s House, which Ben linked in his previous post, is another example of something not working as it should. However, there are many examples where this technology is working. Wal-Mart and many of its suppliers are about to start using RFID tags to monitor and track goods as they move from place to place. The FDA has, as recently as last Monday, released policy guidelines that encourage the use of RFID tags on prescription drugs. At least one pharmaceutical company, Purdue Pharma, has already begun tagging some of its drugs.
I don’t think anyone has ever stated that UbiComp would be perfect and flawless and that nothing would go wrong. UbiComp is not Utopia. It’s just another computing paradigm.
There are certain enabling technologies that will allow the goals of UbiComp to be fulfilled. Ben mentions a couple of them: speech recognition and agents. Ben says that these are
a Long Way Off. The kind of speech recognition engine that doesn’t need to be trained on a particular voice, that has an unlimited vocabulary, and that can infer intended meaning from context might be a long way off, but that kind of speech recognition is not required to bring about UbiComp. For example, I can already ask my little mobile phone to call home by talking to it. I regularly use this feature, because it’s by far the quickest way to dial. It’s nothing fancy; my mobile phone lacks the processing power to do anything like proper speech recognition, but it’s an example of a simple enabling technology that works.

As for agents: well, A.I. might be a very long way off, but again, A.I. is not a necessity for useful agent technology. The cool thing is that lots of neat things can be accomplished without true A.I. Check out this project, for example. Sure, it’s an honours project, but it works on real phones. It does call redirection based on where you are, what devices you currently have access to and what you’re doing. It automatically switches profile depending on context. And it does adaptive ringing, which means that if somebody really, really needs to talk to you, because they’re calling every 30 seconds, the ring tone will get louder each time the person calls.
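The adaptive-ringing idea is simple enough to sketch without any A.I. at all. Here is a minimal, purely illustrative version: the function name, thresholds, and data structures are my own assumptions, not the actual project’s code. The only rule is that a repeat call from the same number within a short window rings one step louder than the last attempt.

```python
import time

# Hypothetical adaptive-ringing sketch (not the project's real API):
# repeat calls from the same caller within RETRY_WINDOW seconds are
# treated as increasingly urgent, so each one rings a step louder.

RETRY_WINDOW = 60   # seconds: a retry inside this window counts as urgent
BASE_VOLUME = 3     # normal ring volume for a fresh call
MAX_VOLUME = 10     # loudest the phone will ring

_last_call = {}     # caller id -> (timestamp of last call, volume used)

def ring_volume(caller, now=None):
    """Return the volume to ring at for an incoming call from `caller`."""
    now = time.time() if now is None else now
    last = _last_call.get(caller)
    if last is not None and now - last[0] <= RETRY_WINDOW:
        volume = min(last[1] + 1, MAX_VOLUME)  # retry: ring louder than before
    else:
        volume = BASE_VOLUME                   # fresh call: normal volume
    _last_call[caller] = (now, volume)
    return volume
```

Three calls from the same number inside a minute would ring at volumes 3, 4, and 5; once the caller stops for longer than the window, the volume resets. The same lookup table could just as easily drive the context-based profile switching: swap the timestamp test for a check against location or calendar state.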
Lots of interesting things can already be done. If they haven’t been deployed in the real world yet, they certainly will be within the next 10 years (although I’m betting the stuff I’ve just talked about or some variant of it will be deployed by some telco or another well before 2015).
So no, not even 90 years, Ben. Think about where computers were 90 years ago and where they are today. 90 years ago we had nothing more than Babbage’s "sketch" of the Analytical Engine. Now I can send e-mail over the Internet using my mobile phone.