The Landing Pilot is the Non-Handling Pilot until the “decision altitude” call, when the Handling Non-Landing Pilot hands the handling to the Non-Handling Landing Pilot, unless the latter calls “go-around”, in which case the Handling Non-Landing Pilot, continues Handling and the Non-Handling Landing Pilot continues non-handling until the next call of “land” or “go-around”, as appropriate.
In view of the recent confusion over these rules, it was deemed necessary to restate them clearly.
— British Airways memorandum, quoted in Pilot Magazine, December 1996 (and in The Pragmatic Programmer, which is where I read it).
Queensland Rail will be offering south-east Queensland commuters free wireless access to the internet from early 2010, according to the Minister for Transport, Rachel Nolan. This access will use spare capacity on the infrastructure used to transmit real-time video footage from surveillance cameras to QR’s control room at Central Station.
One thing from that story that caught my attention was this:
She (Rachel Nolan) said people living near train lines or stations would not be able to tap into the free internet service because it would be “firewalled”.
That would have to be one pretty intelligent firewall! Here are some actual possibilities for guarding against free-loaders.

One not so attractive way to do it would be to set a limit on daily downloads. The theory is that there’s only so much you could download on the longest possible trip on the QR network in south-east Queensland (say, Gold Coast to Nambour, or something like that).

The other, more attractive, solution in my opinion would be to tie usage to go cards. Your internet session starts when you swipe on at the beginning of your journey, and it finishes when you swipe off. There’d be some kind of web-based login procedure like you get at hotels and elsewhere, where you enter your go card number to gain access; or regular users could have the option of registering the MAC address of their wireless card with QR/Translink to skip the login procedure.

Given that it still takes ages for a credit card top-up to find its way onto my go card, I don’t hold out much hope for QR/Translink being able to implement this particular solution within the already very optimistic time frame of early 2010. But I do think it’s a reasonable long term solution. It might even help Translink in their quest to move more commuters over to the go card from paper tickets.
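For what it’s worth, the go-card-tied scheme could be sketched as something like the following. This is a purely hypothetical toy model, just to make the idea concrete; every name in it (WifiGate, touch_on, and so on) is invented, and no real Translink system works this way:

```ruby
# Hypothetical sketch: an internet session is opened when a go card
# touches on and closed when it touches off. All names are invented
# for illustration.
class WifiGate
  def initialize
    @registered_macs = {}  # wireless MAC address => go card number
    @on_board = {}         # go card numbers currently mid-journey
  end

  # Regular users pre-register their wireless card's MAC
  # so they can skip the web-based login.
  def register_mac(card_number, mac)
    @registered_macs[mac] = card_number
  end

  # Swiping on at the start of a journey opens the internet session...
  def touch_on(card_number)
    @on_board[card_number] = true
  end

  # ...and swiping off at the end closes it.
  def touch_off(card_number)
    @on_board.delete(card_number)
  end

  # A device gets access only while its registered go card is
  # mid-journey, which is what keeps out the neighbours near the line.
  def allowed?(mac)
    card = @registered_macs[mac]
    !card.nil? && @on_board.key?(card)
  end
end
```

Swipe on, and allowed? starts returning true for your registered device; swipe off, and it goes back to false.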
There are so many sites offering suggestions on how to get the mysql rubygem working on Mac OS X Leopard. None of them worked for me. Here’s how I got the gem installed.
After attempting to install the gem normally, with

sudo gem install mysql

(which bombs out), go into /Library/Ruby/Gems/1.8/gems/mysql-2.7 (or wherever it tried to install the gem) and add the line

#define ulong unsigned long

near the top of the file mysql.c.in. Repackage the gem by issuing

sudo gem build mysql.gemspec

and then install with

sudo env ARCHFLAGS="-arch i386" gem install --local mysql-2.7.gem -- --with-mysql-config=/usr/local/mysql/bin/mysql_config

You’re done. (You may not need to wrap the gem install command with env.)
It’s been a while in the making, but augmented reality on your mobile is just about here. By that I mean that such applications are now available for mobile phones, and it will only be a matter of time before they gain critical mass. So what am I talking about?
In the research space, I can refer you to, among others, iCam (2006) and MARA (2006), from researchers at Georgia Tech and Nokia respectively. iCam allows the placement of virtual sticky notes on objects in the physical world through a mobile device. This is neat, since the sticky notes appear only to the people you want to see them. A limitation of iCam is that, while placement of these sticky notes is very accurate, it only works indoors. MARA overlays information about the real world, and even the people in it, in real time, provided information about those objects is being streamed from a central server.
Then there’s this concept device from petitinvention, which takes the idea a few steps further. The user can see information about buildings and locations overlaid on the video stream from the mobile device’s camera, but the same tool can be used to select text from a piece of paper (like a newspaper). Essentially, it’s an augmented reality search tool.
In the commercial/start-up realm, a couple of companies have been creating a bit of buzz. First there’s Enkin, which has been developed for the Google Android mobile phone platform. It enables users to tag places and objects on Google Maps, and then to see these tags overlaid on the real world as they walk around with the phone. My favourite is Sekai Camera from Tonchidot. I’m not going to explain it; just watch the video below. But note that even products on the shelves in shops are tagged in the virtual world and overlaid on the real world. And it’s a very social application.
There are probably still all sorts of hurdles to overcome, but what a great presentation.
Many Eyes
I’ve been playing with Many Eyes from IBM Alphaworks. It’s a visualisation tool for data sets of various sorts. To test it out, I uploaded my Olympic medals per kilotonne of carbon emissions data sets. You can see the data sets here and here, and the resulting bar charts here and here, respectively (Java required).
I’m still ahead. But would you believe that, on the very day of my previous post about the go card, the go card machine in the bus failed to work when I needed to get off the bus at Forest Lake? How’s that for coincidence? Exactly the same thing happened again last night when trying to touch off.
What happens is this. I get on the bus at Indooroopilly, touch on, the light goes green and says something about a continuing journey. This is correct, as I change buses at Indooroopilly on my way home from NICTA’s new location at UQ. But then, as I’m exiting the bus at Forest Lake, the go card machine says “Please wait…”. I walk down to the front of the bus to speak to the driver, who tells me (on both occasions) “but the machine wasn’t working at the start of the route, so you don’t need to touch off.” I tell him, “No, at Indooroopilly it was working. The light went green and everything was normal.” Rather than holding up the other passengers any longer, I just hop off the bus and call TransLink to make sure I’m not overcharged (except that I haven’t got around to doing that this time).
The thing that irks me even more is the consistent lateness of the 460. It wouldn’t be so bad if it were always late by the same amount, but it’s not. Although my Twitter page records the many occasions the 460 has been late in the evening, it’s my morning trip that really frustrates me. I have not known the 8:10am 460 from Forest Lake E to run within 10 minutes of its scheduled time in all the time I’ve been catching it. This is quite unbelievable given that the bus is supposed to start at Inala at 7:58am, and couldn’t possibly be getting caught in traffic between there and my stop.
I think overall Brisbane’s public transport is improving. But, jeez, it still sucks so badly, and I’m not sure it’s keeping up with the growth of our population.
Macs are increasing their share of the personal computing market, and Aussies are leading the charge: in the last quarter, Mac sales grew at a whopping 52% in Australia. Overall, Macs are still way behind, at about 3.5% of the global market. But apparently that’s double what it was five years ago.
Microsoft without Gates
Bill Gates has retired from Microsoft. This will be a turning point in the industry. Specifically, I think Apple will make huge inroads in the desktop market with Mac OS X to the detriment of Microsoft and Windows. Microsoft will eventually focus on the server side of the business. What do others think?
Startup: an explanation
It’s probably time to come clean about my recent spate of posts on startups, Ruby, Python and so on. There are a few things about peer review and publishing in academia that I think could be better, so I tried to figure out an alternative process that retains the benefits of the current system while overcoming some of its problems. We think we’ve done that, and it turns out I wasn’t the only one who thought things could be a lot better.
NICTA has provided pre-seed funding in the form of a couple of commercialisation grants to implement this new way of doing things. I’ve hired a top notch graduate software engineer (who’s been working with me as a student for the past year and a half on unrelated things) to help me deliver alpha and beta versions of this system over the next six months or so. For this project, we’ll be working in startup mode; I’ll be making every effort to provide a small company atmosphere for the engineer and others who join the project.
It turns out the solution to the problem can also be applied to (web) search, since it is essentially a nice way of ranking documents within communities. I can’t go into the details of the solution here, but I can list some of the things that I (and other researchers, as it happens) think could be better.
- Traditional peer review requires that authors trust reviewers to act in good faith – reviewers are not required to “put their money where their mouth is”, so to speak;
- Related to the above, traditional peer review gives no real incentive to support the good work of a group of competing scientists;
- Related to the above, traditional peer review provides no real incentive not to support the poor work of a colleague or friend;
- Traditional peer review gives no tangible recognition to the many hours of reviewing that scientists do – reviewing is just something you’re expected to do for the good of the scientific community;
- Traditional peer review gives no incentive to authors to self-review their work before submission, meaning reviewers get burdened with too many bad and mediocre papers;
- Metrics such as H-index and G-index are somewhat arbitrary, do not give a direct indication of the esteem in which scientists are held by their peers, and are not indicative of a scientist’s current capacity to produce good work;
- Citation collusion is too easy to accomplish, but difficult to filter out when calculating the above metrics;
- Not enough cross-fertilisation between fields, largely because closed communities are too common; and
- The publication process is too slow, often taking years for a journal paper and months for a conference paper.
These are some of the problems that researchers say they can see with the current way of doing things. We think we can claim that our idea solves many of these problems. For example, under our system, which we are calling PubRes for the moment, citation collusion is futile. Under PubRes, you’d also be silly to lend support to a paper that you know isn’t very good (even if it is written by a colleague), and you’d be silly not to lend support to a good paper (even if it is written by a competing group of scientists or your worst enemy). There are some things we haven’t solved, like honorary authorship and ghost authorship, but these are problems I’d like to investigate in the future. Although I can’t reveal the details here, I can say that the underlying mechanics of PubRes are no more complicated than traditional peer review procedures (and probably much less complicated), but it is a major departure from how things are done now. I can also say that the feedback we’ve got from people we’ve explained it to has been overwhelmingly positive, which is the main reason I’m still pursuing this.
NICTA are making sure we do this properly, so some of the grant money is being spent on figuring out the structure of the academic publishing market. We already know that the top three academic publishers had combined 2007 revenues in excess of $US3 billion, but that doesn’t say much. We’re currently doing some much deeper market research to get a better understanding of the domain.
It’s important to note that what we’re doing is completely different to all known attempts to bring science to the web. PubRes is not another CiteULike or Connotea. It’s not another arXiv.org. It’s not like PLoS One or PubMed Central. It’s different to ResearchGATE and Science Commons. While our implementation may contain elements of these existing tools, PubRes is a fundamentally new way of getting your research published, and it’s a new, much fairer (we think), more direct way of rating scientists and the papers that they write. One of our aims is also to make the whole reviewing, publishing and reading cycle a lot more fun.
With any luck, a public beta will be available early next year. Oh, we think we’ve settled on Ruby and Ruby on Rails for the web tier, and no doubt there’ll be some AJAX stuff in there to pull off a few nifty browser side features we have in mind. Stay tuned.
In the bits of spare time I get here and there, I’ve been continuing my hypothetical hunt for a language and web framework in which to implement my hypothetical "web 2.0" idea. It occurs to me that if all these little bits of spare time were clumped together so that I could, hypothetically, do some actual coding as opposed to “investigating”, hypothetically I’d be well on my way to having a hypothetical working system by now. But, alas, little bits of time here and there is all I’ve got at the moment.
Anyway, after having checked out Python and Django and decided that I’d be happy enough with that set of tools, I thought I’d better check out Ruby and Ruby on Rails to see what all the fuss is about. Well, I have to say, so far I really like the Ruby language. I’ve been helped along by what has to be the weirdest, but coolest, guide to programming ever written for any language (what other programming guide comes with cartoon foxes and its own soundtrack?). I’m still learning the ins and outs of Rails, but there are some very helpful tutorials online for this, too. The fact that Ruby, like Python, comes pre-installed in Leopard was a pleasant surprise. Ruby comes with a command shell environment called irb (Interactive Ruby), which enables you to type Ruby code at the command prompt (again, just like Python’s python command line tool). This makes it very easy to experiment with the language.
One of the things I like about Python is list comprehensions. They’re a very neat and convenient way of mapping a list to a new list by applying a function to each element of the original. They work a bit like the map function in many other languages, except you can include conditional statements. The heavy use of list comprehensions in Programming Collective Intelligence tells me that there’s a good chance they’ll come in handy for me later on. Here’s a trivial example:
>>> list1=[1,2,3]
>>> list2=[x**2 for x in list1 if x%2==1]
>>> list2
[1, 9]
In Ruby there’s no syntactic sugar for list comprehensions, but it turns out you can pretty easily implement the required behaviour:
>> list1=[1,2,3]
=> [1, 2, 3]
>> list2=list1.map {|x| x**2 if x%2==1}.compact
=> [1, 9]
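The .compact call is doing real work there: map keeps a nil for every element where the if condition is false, and compact strips those nils out afterwards. A quick illustration:

```ruby
list1 = [1, 2, 3]

# map returns nil for elements where the condition is false...
with_nils = list1.map { |x| x**2 if x % 2 == 1 }
# with_nils is [1, nil, 9]

# ...and compact removes those nils
squares = with_nils.compact
# squares is [1, 9]
```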
Furthermore, since classes are never “sealed” or “final” in Ruby, you can do something like this:
>> class Array
>> def comprehend( &block )
>> block ? map( &block ).compact : self
>> end
>> end
We’ve just added another method to the (existing) Array class which works very much like a list comprehension:
>> list1.comprehend {|x| x**2 if x%2==1}
=> [1, 9]
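Note the block ? … : self test in comprehend: called without a block, the method just hands back the original array unchanged. A quick check (the method is redefined here so the snippet is self-contained):

```ruby
# Re-open Array and add the comprehend method from above
class Array
  def comprehend(&block)
    block ? map(&block).compact : self
  end
end

list1 = [1, 2, 3]

# With a block: behaves like a Python list comprehension
odds_squared = list1.comprehend { |x| x**2 if x % 2 == 1 }
# odds_squared is [1, 9]

# Without a block: falls through to self, returning the array unchanged
same = list1.comprehend
# same is [1, 2, 3]
```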
That’s a little of what I’ve learned about Ruby in the past week or so. Anyway, I can say that I’ve narrowed down my search to two candidates. Python has the lead in terms of being a mature technology. But Ruby really is fun to program in, and I like its syntax better than Python’s. Furthermore, I spent a while faffing around with Django and mod_python for my Mac. But getting Rails up and running was a breeze using the Mongrel web server – a production quality web server for Ruby applications, used by many web sites including Twitter. Ultimately, my (hypothetical) first hire gets the final say. :-)