Categories
Eco-philo-pol

Canberra: Safeguarding Australia Summit

I spent most of the week down in Canberra, where I attended the Safeguarding Australia Summit with Karen and a few other NICTA people. The summit consisted of a plenary stream, a NICTA stream and a satellite technology stream, and the last day was taken up by the Research Network for a Secure Australia (RNSA) Conference. A number of good speakers gave keynotes in the plenary sessions. Perhaps the most impressive talk was given by Deputy Assistant Commissioner Peter Clarke of the Metropolitan Police in the United Kingdom. His presentation covered a range of operations the police have carried out, and are still carrying out, in relation to recent terrorist activities in the UK.

For the most part, the keynote presentations avoided Left/Right political bias, but there were times during the panel sessions when political bias quite visibly crept in. One slightly uncomfortable moment arose when, during a panel session on “Homegrown Terrorism”, Ameer Ali, Chairman of the Muslim Community Reference Group, fielded a question from a Zionist lobby group about Hezbollah. During the same session, however, Federal Agent Frank Prendergast of the Australian Federal Police gave what I thought was a very considered presentation on the role of the AFP in combatting terrorism within Australia, and on the AFP’s relationship with the mainstream Australian Muslim community, who, for obvious reasons, are one of the community groups most directly affected by ongoing investigations into terrorism.

The conference was quite different from what I’ve been used to in the past. The plenary stream was very interesting, but the technology streams were more or less a bunch of industry people trying to market their wares.

Categories
Random observations

Strong AI by 2029

Earlier this month, Ray Kurzweil presented a paper at the Dartmouth Artificial Intelligence Conference which proclaimed that strong AI will be possible within the next 25 years: 2029, specifically, is the year he’s suggested a machine will first pass the Turing Test. If he’s right, what a time to be alive! Even if he’s wrong by a few decades, centuries or millennia, we’ve still got a lot to look forward to in our lifetimes. Techniques pioneered by AI researchers have been finding their way into mainstream applications for years, and this trend will continue as computing power increases and researchers invent ever-smarter algorithms.

While I’m not sure strong AI will arrive quite as quickly as Kurzweil thinks it will, I’m firmly in the camp that thinks it will arrive one day. I see no reason to believe that the human brain (or any kind of “brain” for that matter) is endowed with some mystical property that provides its intelligence. Although Kurzweil’s timeframe seems a bit on the optimistic side, it will take only one or two propitious findings in the fields of computer science or neuroscience to catalyse AI research and bring the goal of strong AI much closer.

Can’t wait!

Categories
Random observations

Seamless and Seamful Computing

A few weeks back Ben and I met at Three Monkeys for a chat about work and life in general. Inevitably, our discussion turned to pervasive computing. Ben spoke about the idea of seamfulness as opposed to seamlessness, which I found interesting. He’s written a short blog entry about it. I think the quote in his blog entry is from Elizabeth Goodman’s blog. Here’s what she said:

Ubicomp-the-conference and ubicomp-the-field are frustrating because they promise the impossible. The promise of computing technology dissolving into behavior, invisibly permeating the natural world around us cannot be reached. Technology is, of course, that which by definition is separate from the natural; it is explicitly designed that way. Technology only becomes truly invisible when, like the myriad of pens sold in Japan’s department stores, it’s no longer seen as technology at all. Deliberately creating something ‘invisible’ is self-defeating. I can think of few recent technologies as visible to the public as RFID, no matter how physically ‘invisible’ it might be.

Yet again, confusion reigns supreme in the UbiComp debate. Much of this confusion stems from the notion of invisibility and the relative importance conferred on that concept. UbiComp doesn’t seek to make all technology invisible, no matter what its critics believe. What it does try to do is hide those aspects of technology that needn’t be visible to humans.

Take the RFID example. For many applications, RFID is a better solution than older technologies like bar code scanners because it does away with user interaction in a situation where user interaction is not required. When Elizabeth Goodman talks of RFID being highly visible despite its small size, I can only assume she means it has gained wide exposure in the media and in research publications. But this is a different kind of visibility! Any technology, when first deployed, is bound to attract attention. It’s what happens after the hype dies down and the novelty wears off that counts. The fact of the matter is that RFID is closer to being invisible than bar codes and the other technologies it will replace. That can hardly be disputed, assuming RFID works as it is supposed to (and given the uptake of the technology by logistics companies, e-ticketing systems and the like, it appears that it does work well for many applications).

Furthermore, this talk of having computers blend with the natural environment is a red herring that draws attention away from one of the real goals of ubiquitous computing, which is simply to release users from explicit interaction with technology as far as it is possible and sensible to do so. The extent to which this is possible, and the degree to which it remains sensible, is determined by the users themselves.

It strikes me that it is often those criticising UbiComp who attribute the most marvellous traits to it. Of course, the more outlandish the “promises” of UbiComp become, the easier it is for its critics to knock them down. Ubiquitous computing is just that: computing that has moved beyond the desktop and become integrated with many aspects of our everyday lives. The technologies that will enable this in a way that is satisfying to users are the subject of ongoing research and debate within the UbiComp community.

Slowly but surely, some technologies are receding into the background. This will not happen for all computer technologies: users gain value from many technologies precisely because they are visible and their boundaries are well defined. But by the same token, there is a large category of technologies of which the user need not be conscious. RFID is an example already given, but there are also simple things like sensors that switch the air conditioner on when you arrive home from work, or software that seamlessly re-routes a telephone call to an appropriate device nearby, doing away with the need for a human operator to redirect the call. These are all examples in which users benefit from invisibility and seamlessness. It will be users (the market) who decide the extent and rate at which technology recedes into the background and becomes part of our work, home, shopping and social environments, but it will certainly happen for some technologies.
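
To make the flavour of that concrete, here is a toy sketch of an invisible, presence-driven rule. Nothing in it comes from a real product; the sensor event names and device names are invented purely for illustration:

 # Toy sketch of a presence-driven rule that needs no user interaction.
 # The sensor event names and device names are invented for illustration.

 def occupant_arrived(event):
     """Interpret a (hypothetical) front-door sensor event."""
     return event == "door-opened-from-outside"

 def on_arrival(home):
     # The user never sees any of this; the technology stays in the
     # background and simply does the sensible thing.
     home["air_conditioner"] = "on"
     home["phone_routing"] = "hallway handset"   # re-route calls nearby

 home = {"air_conditioner": "off", "phone_routing": "voicemail"}
 if occupant_arrived("door-opened-from-outside"):
     on_arrival(home)
 print(home)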

For sure, seamfulness has its place too. Ben gave me this really cool example. There’s a game played with GPS- and Wi-Fi-enabled PDAs. There are two teams, and each team has a safe. The playing field is randomly seeded with virtual coins that are visible on the PDA screen. The object of the game is to move as many coins as possible from the playing field to the safe. You collect a coin by moving to its GPS coordinates. Coins can be stolen in transit (kind of like a mugging). But Wi-Fi suffers from gaps in coverage, especially when there are buildings and other objects around. The game works because players can take advantage of these black spots to sneak a coin back to the safe without getting mugged: upon collecting a coin, a player can jump into a black spot and then emerge from it at the point closest to the safe. Thus the seams of Wi-Fi can actually be of some value in certain applications. However, nobody would dispute that Wi-Fi is far more useful to a much larger range of applications when it remains invisible and seamless, which suggests there’s a place for both seamlessness and seamfulness in the future of computing.

Categories
Random observations

CSIRO Job

Peter Corke at the CSIRO ICT Centre in Pullenvale has asked me to spread the word about a position his group is looking to fill as soon as possible. In fact, he says he’d start the successful candidate tomorrow if he could. Here are the details.

The job is within the sensor nets research program, and covers the following areas:

  • Sensor net applications
  • Routing in sensor nets
  • Data muling (i.e., collecting data from sensors and transporting that data via mobile nodes back to a base location)
  • Web backends
  • Data service integration

The position is a 12 month contract and attracts a mid-$60k salary. Please contact Peter directly if you’re interested. Otherwise, please tell your friends about this job.

Categories
Random observations

Kerry calls it quits

Today was the last day at the DSTC for one of its icons, Dr. Kerry Raymond (Distinguished Research Leader, Adjunct Prof., ITEE). All the best for the future, Kerry!

Categories
Random observations

What a week!

Sometimes it seems as though months can go past without much happening, and then, all of a sudden, lots of things happen in the space of a few days. The past week has been jam packed with notable events.

It all began last Saturday, when Karen and I looked at nine houses in the Forest Lake area. The last house that we saw was the best of the lot: it was perfect for our needs, and we liked it very much. We visited another house on Sunday morning (not too bad, but a lot older and therefore in need of some patching up here and there) and then took our parents to see the one we liked from the previous day. They liked it too. On Tuesday we made an offer, and a price was agreed on Thursday. So, unless something disastrous happens in the next thirty days or so, Karen and I will be the proud owners of a three-and-a-half-year-old house in Ellen Grove.

On Wednesday, I was offered a position as a research scientist at a new lab in the city. On Thursday I had a meeting with my current boss to inform him of the situation. I was due to travel to Newcastle, Rockhampton and Townsville in the space of three days next week to install our new product (deSide version 2) at various clients’ facilities, and then to Melbourne the following week to attend the Energy Users Association of Australia conference at the Grand Hyatt. Part of the reason for these travels was so I could meet our clients and vice-versa. Luckily there was enough time for Paul (my boss) to rethink who to send on these trips. The meeting I had with Paul was long, but he took the news as well as could be expected. I’ll be sorry to leave global-roam after such a short stay because I’ve thoroughly enjoyed my time there and learned a great deal about what it takes to develop software that people actually use on a daily basis, but happy to be stepping into a job that allows me to work on problems very similar to those I worked on during my Ph.D. candidature. I’m still to negotiate a finishing date with Paul, though I expect I’ll be staying on until sometime near the end of the year.

As if buying a house and having to tell my boss that I was leaving were not stressful enough, today was also the deadline for deSide development and testing. It’s all finished bar maybe ten percent of the installation file, which I’ll have to do tomorrow morning.

Also, this week I’ve been setting up a Linux box to act as a gateway between global-roam’s LAN, Roam Consulting’s LAN (Roam Consulting is a sister company of global-roam with whom we share office space), a Cisco 877 ADSL router/modem, a Netgear ADSL router/modem and one other ADSL router/modem. I wrote a failover script so that the Linux box will switch between our primary Internet link (the Cisco modem) and our secondary link (the Netgear modem); for Roam Consulting, it’s the exact opposite. The script also has to take into account the fact that our SMTP server changes depending upon which link is in use (we don’t have our own internal mail server). I set up a couple of dummy DNS zones, and the failover script adjusts the zone files accordingly (i.e. it changes the IP address associated with the hostname ‘smtp’ and increments the serial number for the zone). We’re putting each company (global-roam and Roam Consulting) onto its own subnet, separated by the Linux gateway, which means we have also had to set up a Windows domain controller for our subnet.
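
For anyone curious, here is a minimal sketch of the failover idea in Python. It is not the actual script: the interface names, addresses, relay IPs and zone file path are all made up, and it assumes a BIND-style zone file plus Linux’s ip and ping tools:

 # Sketch of the dual-ADSL failover idea, with the DNS-based SMTP switch.
 # Interface names, addresses and file paths here are all hypothetical.
 import re
 import subprocess
 import time

 LINKS = {
     "cisco":   {"iface": "ppp0", "gw": "10.0.0.1", "smtp": "203.0.113.10"},
     "netgear": {"iface": "ppp1", "gw": "10.0.1.1", "smtp": "198.51.100.10"},
 }
 PROBE = "192.0.2.1"               # some host to ping over the active link
 ZONE = "/etc/bind/db.office"      # dummy zone containing the 'smtp' record

 def link_up(name):
     """Probe a link by pinging out of its interface."""
     cmd = ["ping", "-c", "3", "-W", "2", "-I", LINKS[name]["iface"], PROBE]
     return subprocess.call(cmd, stdout=subprocess.DEVNULL) == 0

 def activate(name):
     """Point the default route at this link and repoint 'smtp' at the
     matching relay, bumping the zone's serial number as we go."""
     subprocess.call(["ip", "route", "replace", "default",
                      "via", LINKS[name]["gw"]])
     text = open(ZONE).read()
     text = re.sub(r"^smtp\s+IN\s+A\s+\S+",
                   "smtp IN A " + LINKS[name]["smtp"], text, flags=re.M)
     text = re.sub(r"(\d+)(\s*;\s*[Ss]erial)",
                   lambda m: str(int(m.group(1)) + 1) + m.group(2),
                   text, count=1)
     open(ZONE, "w").write(text)
     subprocess.call(["rndc", "reload"])  # assumes BIND is serving the zone

 active = "cisco"                  # global-roam's primary link
 activate(active)
 while True:
     if not link_up(active):
         active = "netgear" if active == "cisco" else "cisco"
         activate(active)
     time.sleep(30)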

So, that was my week. The weekend will be spent finishing the deSide setup file and then hunting for a reputable building and pest inspector. There are a couple I like the look of, and Karen also has some ideas of who to use, so it shouldn’t be too hard to arrange, assuming they’re not all booked out for the next few weeks. Tomorrow evening Karen’s parents are taking us out to dinner for a belated engagement celebration, which I’m really looking forward to. Also, I hope I find the thirty minutes I’ll need to touch up my responses to my thesis examiners’ reports. (Hmm, I suppose I could have been doing that now instead of writing this long blog entry). Finally, I’m looking forward to sleeping a bit. :-)

Categories
Random observations

Melbourne

I’ve just returned from Melbourne, where I presented a paper at The 9th International Conference on Knowledge-Based Intelligent Information & Engineering Systems (KES 2005) and visited some relatives. The trip served to reinforce how much I like Melbourne. The conference venue was quite spiffy (Hilton on the Park), though there was a lot of audio interference between some of the rooms. Conference attendees who registered as students (like me) didn’t get a copy of the proceedings in hard or soft copy (although I’m glad I didn’t get the hard copy, which came in four LNCS volumes), nor did their fee provide entrance to the cocktail party or conference dinner. This turned out okay, because I found the really nice little restaurant that Karen and I went to last time (corner of Little Collins St and Block Place). There was an awesome mariachi band playing. Those guys totally rocked. The food was great too.

I saw the part of the National Gallery of Victoria that Karen and I didn’t get to see last time. There was a temporary exhibition of about 100 pieces from the Rijksmuseum in Amsterdam, but I’d already seen them all with Karen when we visited Holland earlier in the year. There was also an exhibition of some of Albrecht Dürer’s sketches. I was impressed by some of the pieces the gallery has managed to procure for its permanent collection. There were several paintings by Rubens (though Rubens was so prolific that his pieces seem to be everywhere) and Rembrandt as well as a Monet.

All in all, a good trip. I didn’t get to meet Rhys because he’s gallivanting around South East Asia on business again.

Categories
Random observations

Thesis returned

My thesis has come back to the thesis office. Jaga tells me the review was good and that there are only minor corrections to make.

Categories
Random observations

Defining context: enough already

These days I am invariably at odds with Ben’s take on ubiquitous computing and related issues, and the trend seems likely to continue.

In On Context, Ben argues that we still don’t have a clear understanding of what context, among other terms, means. Come off it, really. We’ve known what context means for centuries, and what it means within computer science for decades (which, when one looks at it, is not substantially different from what it means in the ordinary world). John McCarthy helped to formalize context over a decade ago – a remarkable feat, and one that would be all the more remarkable if the meaning of context was still unclear at that time. McCarthy has also looked at formalizing common sense, a term that I believe is much harder to define than context. The problem is no longer figuring out what context is or what it means; it is understanding how to monitor (sense) it, model it, derive it, and use it to do something sensible even in the face of imperfect information. These are hard problems, and it strikes me as a distinct possibility that people are still talking about what context is because they are avoiding the difficult problems involved in utilising it.

Not only do we already know what context is, we know what it is useful for. All you have to do is observe the way a typical conversation proceeds between two people and see how they fill in the blanks in the absence of explicit information. Or, the next time you see a person standing in the middle of an intersection wearing a blue uniform and waving their hands about in various ways, observe the way in which you and other motorists respond. It is unlikely that any motorist would make a call to the nearest hospital for the mentally ill. But if you saw a similarly dressed person making those gestures in the middle of a football field, that phone call might be made. Why? Context is the element that allows us to differentiate between these two scenarios or situations. In the words of Dey:

Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves.

So how does an application decide what is relevant, without being pre-configured with a static set of entities that the programmer/administrator/user has deemed to be relevant to the scenarios in which the application will be used? Now there’s a real research problem. I think this is the gist of what the IFTF article to which Ben refers was getting at – that context is mutable and hard to track, not that the term itself is hard to define.

The meaning of context, I think, is well understood, and the fact that there are many slightly different definitions of context floating around in computer science does not imply otherwise, just as the many ways of describing a “light-bulb” do not make that term unclear. So I have no hesitation in adding another definition to the mix. This definition approaches context from a more functional point of view:

Context is the implicit environmental parameter that can be used to disambiguate a statement or an action.

That goes for statements made in natural languages and in programming languages. For instance, in an application, we may wish to send a message to somebody. Perhaps the only formal parameter provided by the user is the contents of the message; it is then up to the application to figure out who the message should be sent to and how it should be sent. As mentioned above, knowing what information is relevant to this decision, and then actually utilising it, is the tricky part.
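
To make that concrete, here is a toy sketch of an application filling in the implicit parameters of a ‘send message’ action from context. It is not from any real system; the context fields and the disambiguation rules below are invented purely for illustration:

 # Toy illustration: context disambiguates an underspecified action.
 # The context model and rules below are invented for the example.
 from dataclasses import dataclass

 @dataclass
 class Context:
     nearby_people: list        # sensed via, say, badge proximity
     active_device: str         # device the user last interacted with
     in_meeting: bool

 def send(body, ctx):
     """The message body is the only explicit parameter; the recipient
     and the delivery channel are both inferred from context."""
     others = [p for p in ctx.nearby_people if p != "me"]
     recipient = others[0] if len(others) == 1 else "ask-the-user"
     channel = "silent-text" if ctx.in_meeting else ctx.active_device
     return recipient, channel

 ctx = Context(nearby_people=["me", "alice"],
               active_device="desktop-im", in_meeting=True)
 print(send("Lunch?", ctx))   # -> ('alice', 'silent-text')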

Context has been defined to death, and just in case it wasn’t dead already, I made sure by throwing another definition at it. The last thing we need is more papers purporting to confer a better understanding of context unto the computer science community. What is needed are solutions to the problems of modelling, incomplete information, filtering, scaling and the derivation of new context information from existing context information. Basically, we need to figure out how to remove the current limiting factors so that context can be applied more generally, and with predictable results that make sense to the user.

Categories
Random observations

2003-11-20 09:14:50

Today I modified my query testing application so that it doesn’t require access to the Internet and uploaded it onto my phone via a data cable. The demo is not very spectacular. It reads in a query in the form of an XML file and an advertisement in the form of another XML file, and sees how closely they match. It reports back the total number of components in the query, the total number of components in the advertisement, and the number of query components that match the advertisement. An exact match occurs if the number of matching components is equal to the number of components in the query. The query can have arbitrarily complex expressions, and the expression language has some built-in functions. Here’s an example:

 <?xml version="1.0" ?>
 <component name="printer">
   <component name="resolution">
     <component name="dpi">
       100
     </component>
   </component>
   <component name="location">
     <component name="latitude">
       <exp>
         <![CDATA[(that < 44.5) && (that > 31.1235) && (that != 33.3)]]>
       </exp>
     </component>
     <component name="building">
       <exp>
         <![CDATA[that.startsWith("GP")]]>
       </exp>
       <component name="level" value="6">
         <component name="room">
           <exp>
             <![CDATA[that == 633]]>
           </exp>
         </component>
       </component>
     </component>
   </component>
 </component>

I hope that shows up as it is supposed to. By the way, this is a completely made-up example; the component names and values aren’t actually supposed to mean anything. This example has three expression components, if I can count properly: the CDATA sections surrounded by <exp> tags. The expression language is dynamically typed (i.e. it figures out whether it’s dealing with a string, integer, float or boolean as it goes), and the operators take on different meanings depending upon the types of the operands. For example, “<” behaves differently depending upon whether the operands are strings or numbers. The keyword ‘that’ stands for the value in the advertisement, so the expression that == 633 is equivalent to using the value attribute. In the example above we could therefore have simply specified <component name="room" value="633"> instead of using an <exp> element. An application would either have some predefined queries or would build these queries from some simple user input, so don’t go thinking that the user has to type in all this gobbledygook.
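
To give a feel for how the matching might work, here is a much-simplified sketch in Python. The real implementation runs on the phone and its expression evaluator is richer; the crude translation of query expressions into Python below (handling &&, || and startsWith only), and the file names, are mine, not the demo’s:

 # Simplified sketch of the query-vs-advertisement matching scheme.
 # The on-phone implementation differs; this just mirrors the counting
 # described above. 'query.xml' and 'advertisement.xml' are hypothetical.
 import xml.etree.ElementTree as ET

 def coerce(text):
     """Dynamically type a value: try int, then float, else string."""
     for cast in (int, float):
         try:
             return cast(text)
         except ValueError:
             pass
     return text

 def eval_exp(expr, that):
     """Evaluate a query expression with 'that' bound to the ad's value.
     Only a tiny subset of the expression language is translated here."""
     py = expr.replace("&&", " and ").replace("||", " or ")
     py = py.replace(".startsWith(", ".startswith(")
     return bool(eval(py, {"__builtins__": {}}, {"that": that}))

 def count(query, ad):
     """Return (matching, total) over the query's component subtree."""
     matched = total = 0
     for q in query.findall("component"):
         total += 1
         a = next((c for c in ad.findall("component")
                   if c.get("name") == q.get("name")), None)
         ok = a is not None
         if ok:
             raw = a.get("value")
             if raw is None:
                 raw = (a.text or "").strip()
             that = coerce(raw)
             exp = q.find("exp")
             if exp is not None:
                 ok = eval_exp(exp.text.strip(), that)
             elif q.get("value") is not None:
                 ok = coerce(q.get("value")) == that
         matched += 1 if ok else 0
         # Nested query components count towards the total even when
         # this level found no counterpart in the advertisement.
         m, t = count(q, a if a is not None else ET.Element("none"))
         matched += m
         total += t
     return matched, total

 # Assumes the two root components correspond; compare their subtrees.
 query = ET.parse("query.xml").getroot()
 ad = ET.parse("advertisement.xml").getroot()
 matched, total = count(query, ad)
 print(matched, "of", total, "query components match" +
       (" (exact match)" if matched == total else ""))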

The best news of all is that the example actually runs several seconds faster on my phone than in the phone’s emulator. Class loading is what takes the longest.

I also worked a bit on the journal paper today. I think it’s ready to be shown to Ted and Jaga.