Categories
Eco-philo-pol

Of Thanksgiving Turkeys and Black Swans

A couple of months ago I finished reading The Black Swan (TBS) by Nassim Nicholas Taleb. I suspect I’ll read it again sometime.

In a nutshell, TBS is about (un)predictability, uncertainty and knowledge. Karen and the kids bought me the second edition of TBS for Fathers’ Day. It’s the one with a lengthy postscript essay, which I thought was arguably the best part of the book. I was happy to read in the postscript (p. 333) that the author appreciates my rather slow reading of his book.

Uncertainty, TBS explains, is predominantly an epistemic problem, one that is subjective and one that the social sciences ought not model with conventional Gaussian methods. The propensity for Nobel prize-winning economists to wield bell curves is the target of much of Taleb’s disdain. Black Swans are those rare, unpredictable events that mathematics has no business in attempting to predict (i.e., because they’re unpredictable. Duh!).

Taleb contends that the concepts of probability and randomness as they are taught in universities by bow-tie-wearing academics, and used by all manner of practitioners, are wholly unsuitable for application in most non-physical domains, such as economics, policy and risk management. These domains are typically dominated by, often cumulative, human action. Sometimes, Taleb explains, these systems can be more appropriately modelled with power laws or fractal mathematics, which can render Black Swans grey; but these models are not intended to provide the concreteness of the more commonplace methods with which we’re familiar. More often, these systems ought not be modelled at all, particularly not with sophisticated mathematics or equations taken from physics textbooks, as they are Black Swan prone and impervious to these approaches.
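The difference between Gaussian and power-law (fat-tailed) randomness is easy to see with a short simulation. The following sketch is my own illustration, not anything from the book: it compares how much of a sample’s total mass can be held by the single largest draw under each distribution.

```python
import random

random.seed(42)
N = 100_000

# Gaussian sample: thin tails, so extremes stay close to the mean.
gauss = [random.gauss(0, 1) for _ in range(N)]

# Pareto sample (power-law tail, alpha = 1.5): a single draw can
# rival the sum of all the others -- Taleb's "Extremistan".
pareto = [random.paretovariate(1.5) for _ in range(N)]

def max_share(xs):
    """Fraction of the total (absolute) mass held by the largest single draw."""
    xs = [abs(x) for x in xs]
    return max(xs) / sum(xs)

print(f"Gaussian: largest draw holds {max_share(gauss):.4%} of the total")
print(f"Pareto:   largest draw holds {max_share(pareto):.4%} of the total")
```

In the Gaussian world no single observation matters much; in the power-law world one observation can dominate everything else, which is exactly why averaging-based methods break down there.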

With uncertainty’s epistemic roots, Taleb spends some time discussing some important aspects of knowledge. Knowledge is biased both in terms of its distribution and its verification. Consider the Thanksgiving turkey: it is fed day after day, given a place to roost (is that what turkeys do?) and generally cared for, until one day, chop! The turkey couldn’t have suspected this was coming. It’s an event lying totally outside its experience. A Black Swan. The butcher, on the other hand, knew what was coming all along. Not a Black Swan. There is an imbalance of knowledge here, highlighting the subjective nature of uncertainty. We don’t need to look far to find examples of this kind of uncertainty, and massively consequential historical events that illustrate the disproportionate impact that Black Swans have.

Knowledge is also subject to the confirmation bias: we tend to seek out evidence that supports our theories, yet no matter how many confirming instances we collect, a single piece of negative evidence is enough to refute them. This asymmetry is the basis for the Popperian notion of falsification, which is itself fundamental to the way science proceeds.

A related idea is that of silent evidence, linked to what Taleb cheekily terms the “Casanova problem”. This reflects the observation that we only remember confirmatory instances, the successes, and rarely the failures. Just think about startup companies. Look at Company A. They’re so successful because they did X, Y and Z, so we should do the same. Of course, there may be a graveyard full of companies that did X, Y and Z, too. The silent evidence. Likewise, Casanova didn’t live to tell his tale because he was particularly clever or immunised against misfortune; rather, probability tells us that a small number of playboy types from that era would survive their ordeals and thus feel indestructible, and perhaps go on to write a book or two about their experiences. But we don’t hear about those other Casanovas, who weren’t quite so lucky. This problem tends to make us blind to the real course of history.

Taleb moves on to describe how social systems are currently modelled by social scientists, and it is here that he is especially scathing. Economics, particularly academic economics, is full of phonies, says Taleb. Run from anyone who tells you that Brownian motion or Heisenberg’s uncertainty principle can model human behaviour. Or, if you’re not the running type, put a mouse down the back of their shirt when they least expect it. These things don’t model true randomness or uncertainty; they model a very tame version of it. This is, he says, evidenced by the fact that our coffee cups don’t jump off our coffee tables. Yet, the equivalent of jumping coffee cups happens with relatively high frequency in social systems (e.g., stock market crashes).

Rather than find false safety in econometrics and other phony methods, writes Taleb, we should heed the advice of that intuitive economic philosopher, Friedrich Hayek. In Hayek’s view, it is impossible for a central planner to aggregate all the pieces of data required to make a meaningful forecast of the economy and to plan a priori. Rather, the interactions between the individual agents in the system, who each hold knowledge, often tacit knowledge, of their own, result in a coherent, self-organised system — what we might call society (though Lady Thatcher mightn’t call it such). One way of looking at this idea is that locals can integrate local knowledge in a way that a central planner never could. The difficulty of central planning has been met by economists with increasingly “scientific” methods, but this creeping scientism, as Hayek called it, is just making matters worse according to Taleb. It is the scandal of prediction. Medical empiricism (evidence-based or clinical medicine) is perhaps the field to which economists should look for inspiration, rather than to physics. Physics, funnily enough, is for the physical world, where its methods and models apply, and where the Gaussian and related distributions are observable in reality. But its models are often inappropriate for the social world.

TBS presents an idea born of a rich body of existing literature, but perhaps nobody in the relevant fields has articulated their ideas as colourfully and passionately as Taleb. I will say that while his narrative is colourful, and while it’s generally comprehensible by the amateur reader (like me), I did find his rambling style a bit hard to digest at times; the book doesn’t flow as well as it could have. Taleb can also be rather self-indulgent at times. Nevertheless, this is one of those books that any thinking person should get a hold of and read. Gift it to someone as a late Christmas present. In fact, my dad scored a copy of it today (New Year’s Day, 2011) for “Christmas”, as my parents just arrived in Brisbane from Cairns. I’ve lent my own copy out to someone, and I hope she remembers to bring it with her next time she travels to Brisbane so that I might lend it to another interested reader.

Categories
Random observations

The Landing Pilot is the Non-Handling Pilot

The Landing Pilot is the Non-Handling Pilot until the “decision altitude” call, when the Handling Non-Landing Pilot hands the handling to the Non-Handling Landing Pilot, unless the latter calls “go-around”, in which case the Handling Non-Landing Pilot, continues Handling and the Non-Handling Landing Pilot continues non-handling until the next call of “land” or “go-around”, as appropriate.

In view of the recent confusion over these rules, it was deemed necessary to restate them clearly.

— British Airways memorandum, quoted in Pilot Magazine, December 1996 (and in The Pragmatic Programmer, which is where I read it).

Categories
Innovation

Programming Collective Intelligence

I’ve been reading a fantastic book written by Toby Segaran called Programming Collective Intelligence: Building Smart Web 2.0 Applications. I’m about two thirds of the way through, but it’s so good that I’m not going to wait until I finish reading it before blogging about it. Essentially, it’s a recipe book for the machine learning algorithms that you’re likely to find under the hood of many successful modern web sites: clustering, support vector machines, decision trees, simulated annealing, Bayesian classification and so on. The AI course at uni was a bit light on statistical machine learning techniques, but this book makes up for it. All the code in the book is written in Python and can be downloaded from the author’s website. The algorithms in the book may prove to be highly useful for my work in ubiquitous computing, too.
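To give a taste of the kind of recipe the book covers, here is a minimal naive Bayes text classifier. This is my own sketch of the general technique, not Segaran’s code (which, as noted, is downloadable from his website):

```python
from collections import defaultdict
import math

class NaiveBayes:
    """Minimal naive Bayes text classifier with add-one smoothing."""

    def __init__(self):
        self.word_counts = defaultdict(lambda: defaultdict(int))
        self.class_counts = defaultdict(int)

    def train(self, text, label):
        self.class_counts[label] += 1
        for word in text.lower().split():
            self.word_counts[label][word] += 1

    def classify(self, text):
        total = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label, count in self.class_counts.items():
            # log prior plus log likelihood of each word under this class
            score = math.log(count / total)
            n_words = sum(self.word_counts[label].values())
            vocab = len(self.word_counts[label])
            for word in text.lower().split():
                freq = self.word_counts[label][word]
                score += math.log((freq + 1) / (n_words + vocab + 1))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

nb = NaiveBayes()
nb.train("cheap viagra offer now", "spam")
nb.train("meeting agenda for project review", "ham")
nb.train("limited offer buy now", "spam")
print(nb.classify("cheap offer"))
```

The same handful of lines, scaled up with real training data, is essentially what sits behind many spam filters and document classifiers.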

Coincidentally, according to the most recent entry in his blog, Toby will be giving a talk on a topic sort of related to one I’ve been thinking about as a possible project at NICTA: Creating Semantic mashups: Bridging Web 2.0 and the Semantic Web.

It turns out that Toby is also a fan of GTD, and he’s written his own web based GTD tool. It doesn’t look like much, but it’s gained some favourable reviews.

Categories
Random observations

The Book Depository

Karen came across a fantastic online book store called The Book Depository. Its prices are highly competitive. But the best part is that they offer free shipping worldwide. We’ve already ordered five books from them: That’s Not My Puppy; That’s Not My Lion; Dear Zoo; The Art of the Start; and Programming Collective Intelligence. They arrived separately, but all within a week, in well padded packaging. One word of caution: be sure to visit the right web site. It’s http://www.bookdepository.co.uk/, NOT http://www.thebookdepository.co.uk/.

Categories
Random observations

Apple crack

If Apple went bust, people would have withdrawal symptoms. If a rival company went, people would buy another computer.

Categories
Innovation

Finding a human need

I’ve been reading over old ubicomp papers in preparation for a new project at NICTA. So it was that I found myself reading “Charting Past, Present, and Future Research in Ubiquitous Computing”, by Gregory Abowd and Elizabeth Mynatt (who, incidentally, should surely be listed among those ubiquitous computing researchers who inspire me – particularly Abowd, whose work I’ve followed since my Honours year in 2000, and whose books were often referenced in the HCI course I took a couple of years before that). One of the most important passages in that paper, to my mind, was tucked away in section 6.1.1, Finding a Human Need (the emphasis is mine):

It is important in doing ubicomp research that a researcher build a compelling story, from the end-user’s perspective, on how any system or infrastructure to be built will be used. The technology must serve a real or perceived human need, because, as Weiser [1993] noted, the whole purpose of ubicomp is to provide applications that serve the humans. The purpose of the compelling story is not simply to provide a demonstration vehicle for research results. It is to provide the basis for evaluating the impact of a system on the everyday life of its intended population. The best situation is to build the compelling story around activities that you are exposed to on a continuous basis. In this way, you can create a living laboratory for your work that continually motivates you to “support the story” and provides constant feedback that leads to better understanding of the use.

Designers of a system are not perfect, and mistakes will be made. Since it is already a difficult challenge to build robust ubicomp systems, you should not pay the price of building a sophisticated infrastructure only to find that it falls far short of addressing the goals set forth in the compelling story. You must do some sort of feasibility study of cutting-edge applications before sinking substantial effort into engineering a robust system that can be scrutinized with deeper evaluation. However, these feasibility evaluations must still be driven from an informed, user-centric perspective—the goal is to determine how a system is being used, what kinds of activities users are engaging in with the system, and whether the overall reactions are positive or negative. Answers to these questions will both inform future design as well as future evaluation plans. It is important to understand how a new system is used by its intended population before performing more quantitative studies on its impact.

It strikes me that too few ubicomp research groups heed this seemingly obvious advice, including our own. Though we might occasionally attempt to build a story, it is not often compelling, and I’ve read far too many papers that suffer from the same problem (caveat: I specifically exclude Karen‘s work from this blunt introspective analysis because her work is typically very well motivated, and compelling; and she read the paper I’ve just quoted early on in her Ph.D. and took note of it). I don’t think it’s a coincidence that the most successful ubiquitous computing researchers have taken this advice to heart. I want to make sure that the new project at NICTA does do these things properly.

Categories
Innovation

Bowl: token-based media for children

Ben delicioused me a link to an interesting paper called “Bowl: token-based media for children”. It describes a media player that is controlled by placing various objects (tokens) into a bowl. The idea was to create a control interface that is easy for children to use and which establishes links between particular physical objects and digital media. Aside from being a really cool means for interacting with a media player, it would have to be one of the neatest uses of RFID that I’ve come across so far. The bowl (or rather the platform that the bowl sits on) is augmented with an RFID reader. The various objects are augmented with RFID tags. When an object is placed in the bowl, an associated piece of media plays on the screen. For example, when a Mickey Mouse doll is put into the bowl, a Mickey Mouse cartoon plays. In theory, various combinations of objects might also have meaning. The system might be configured so that if Mickey Mouse and Donald Duck are placed in the bowl, a cartoon featuring both these characters starts playing. The system becomes very social and conversational when homemade objects are augmented with RFID and linked to, say, home video clips or family photos, as demonstrated by the experiment reported in the paper.

I wonder what sorts of casual, natural interactions such as those induced by Bowl might make sense in the domain I’m working in? What are the relevant artefacts that could be augmented to create new meanings for the people who interact with them?

Categories
Innovation

MapReduce

Last week I read a 2004 paper called MapReduce: Simplified Data Processing on Large Clusters. It was written by a couple of Google researchers, and details a simple programming model and library for processing large datasets in parallel. MapReduce is used by Google under the hood for lots of different things, from indexing to machine learning to graph computation. Very handy indeed.
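The programming model itself is tiny: the user supplies a map function that emits key/value pairs and a reduce function that merges all the values sharing a key; the library handles everything else. Here is a single-process sketch of my own to illustrate the model (the real library, of course, distributes these phases across a cluster):

```python
from collections import defaultdict

def mapreduce(inputs, mapper, reducer):
    """Toy, single-process illustration of the MapReduce model."""
    # Map phase: emit (key, value) pairs from each input record.
    intermediate = defaultdict(list)
    for record in inputs:
        for key, value in mapper(record):
            intermediate[key].append(value)  # "shuffle": group values by key
    # Reduce phase: merge all values associated with each key.
    return {key: reducer(key, values) for key, values in intermediate.items()}

# The canonical word-count example from the paper.
def word_count_mapper(document):
    for word in document.split():
        yield word, 1

def word_count_reducer(word, counts):
    return sum(counts)

docs = ["the quick brown fox", "the lazy dog", "the fox"]
print(mapreduce(docs, word_count_mapper, word_count_reducer))
```

The appeal is that the mapper and reducer contain no parallelism, fault tolerance or I/O logic at all; those concerns live entirely in the framework.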

So imagine my surprise to find in last Friday’s edition of ACM TechNews that this paper has been republished in Communications of the ACM this month, albeit in a slightly shorter form. Aside from a few cosmetic changes (updated figure and table), the content of the papers is the same. That is, you don’t gain any knowledge from reading one of the papers that you wouldn’t gain from reading the other. There is no indication in the more recent publication that so much content has been duplicated from an earlier paper, though there is a citation to the older paper. In short, this is not new material, having been first published more than three years ago. Communications of the ACM seems to be trialling a new model, whereby the best articles from conferences are modified and republished for the ACM audience. But seriously, the modifications in the republished MapReduce article are negligible. What gives?

Categories
Eco-philo-pol

Book review: The Upside of Down

Albert Einstein once said, “In the middle of every difficulty lies opportunity.” I suppose it is this observation that lies at the heart of The Upside of Down: Catastrophe, Creativity, and the Renewal of Civilization, a book by Thomas Homer-Dixon. I was really looking forward to reading this book, having read an interview that New Scientist did with its author a while ago. So interested was I to read it, that I sent the author an e-mail asking him if it would be published in Australia, and if so, when. To my surprise, not only did I get an immediate reply from Homer-Dixon telling me that he was in negotiations with an Australian publishing company, he was kind enough to send me another e-mail months later when the book was finally launched in this country. I bought myself a copy immediately.

The Upside of Down identifies five “tectonic” stresses that our world is facing. These stresses relate to population, energy, environment, climate and economics, and they can combine with multipliers – the major ones being the rising connectivity of our world and the increasingly disproportionate power of small groups of people who may wish to do horrible things – to cause synchronous failure, or the kind of catastrophic collapse of our civilisation from which it would be hard or impossible to recover.

Homer-Dixon also espouses an interesting theory about the role played by energy return on investment (EROI) in the sustainability of a civilisation, and illustrates this theory using the Roman Empire as an example. In brief, civilisations become harder to sustain as the ratio of the energy they generate to the energy expended in generating it grows smaller. The Romans were dependent on food-based energy sources: man- and animal-power drove the Roman economy, and these were fuelled by grain and so forth. The Romans exhausted large swathes of agricultural land, and as they did so, their EROI became smaller and smaller. Our civilisation is, of course, locked into an oil-based economy, and it’s not clear how far away we are from serious oil shortages.

The book also discusses panarchy as it relates to systems theory. Panarchy essentially describes the adaptive cycles of growth, accumulation, restructuring, and renewal of any complex system. There comes a time in any such system when restructuring or collapse is inevitable. In naturally occurring systems the collapse is followed by a period of adaptation and creativity in which the system slowly regains its complexity. Homer-Dixon feels that in human systems, however, the growth phase may be artificially prolonged, resulting in a much more devastating collapse.

Overall, the book is a thoroughly interesting read. Unfortunately, though, its penultimate and concluding chapters let it down badly. The reasons for this are numerous. For starters, Homer-Dixon fails to offer any serious solutions. We don’t know what the breakdown of our societies will look like, he says, but we can still figure out how we might respond. And how might we do this?

In vigorous, wide-ranging, yet disciplined conversation among ourselves, we can develop scenarios of what kinds of breakdown could occur. In this conversation, we shouldn’t be afraid to think “outside the box” – try to imagine the unimaginable – because in a non-linear world under great pressure, we’re certain to make wrong predictions if we just extrapolate from current trends.

Although I don’t disagree with this per se, it is hardly the kind of visionary advice I was expecting from this policy wonk, whom, apparently, politicians around the world take quite seriously.

But it wasn’t just the lack of any real conclusions that left me cold. It was also the existence of several contradictions and a misunderstanding of some of the key scientific and economic theories underpinning the book.

First, Homer-Dixon seems to give more weight to the likelihood of targeted attacks than to random failures, and therefore argues we ought to avoid scale-free networks as far as possible. Scale-free networks are prevalent throughout nature and our societies, and they exhibit the property that their link distribution follows a power law (i.e., a few nodes are highly connected whilst most others have only a small number of connections). This means that when a random node or link fails, the probability of major disruption to the overall network is small. But if a few hubs (highly connected nodes) fail or are deliberately targeted, the effects can be disastrous. Yet Homer-Dixon offers little evidence that targeted attacks are in fact more likely than random failures.
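The asymmetry between random failure and targeted attack is easy to demonstrate for yourself. The sketch below (my own, using a simple preferential-attachment growth rule to produce a roughly power-law degree distribution) grows a network and compares the size of the largest connected component after deleting random nodes versus deleting the highest-degree hubs:

```python
import random
from collections import defaultdict

random.seed(1)

def preferential_attachment(n, m=2):
    """Grow a scale-free-ish network: each new node links to m existing
    nodes chosen in proportion to their current degree."""
    edges = set()
    targets = list(range(m))          # node ids, repeated once per degree
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(targets))
        for t in chosen:
            edges.add((new, t))
            targets.extend([new, t])  # both endpoints gain a degree
    return edges

def largest_component(nodes, edges):
    """Size of the largest connected component among surviving nodes."""
    adj = defaultdict(set)
    for a, b in edges:
        if a in nodes and b in nodes:
            adj[a].add(b)
            adj[b].add(a)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

n = 500
edges = preferential_attachment(n)
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

k = 25  # remove 5% of the nodes in each scenario
random_removed = set(random.sample(range(n), k))
hubs_removed = set(sorted(degree, key=degree.get, reverse=True)[:k])

survivors_random = set(range(n)) - random_removed
survivors_hubs = set(range(n)) - hubs_removed
print("after random failures:", largest_component(survivors_random, edges))
print("after targeted attack:", largest_component(survivors_hubs, edges))
```

Deleting random nodes barely dents the network, while deleting the same number of hubs fragments it far more severely, which is the robust-yet-fragile property Homer-Dixon is worried about.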

Second, the concluding chapters level serious criticisms at capitalism and markets, pointing the finger at the growth imperative and the widening gap between the richest and poorest people. While the latter is clearly true, it is also clear that capitalism has rescued more people from poverty than any other system or sustained effort to date. As for the growth imperative, Homer-Dixon seems to assume that growth comes only from plundering the Earth’s resources, when in fact modern economic theory suggests it comes largely from increases in efficiency and productivity. He also ascribes to capitalism the failures and poor choices of governments. Because western economies are well managed in the short term, they provide less opportunity for small collapses and the innovation that follows them, a point he himself acknowledges. But what would happen if governments had less to do with economic management? This is not a problem with free markets and capitalism but with their manifestation in the presence of governments who must optimise their policies for the short term in order to be re-elected.

As for the contradictions, while with one stroke of his pen he berates scale-free networks, in the next he praises the adaptivity of the World Wide Web, which, of course, is scale-free. The web, he says, is unstable enough to create unexpected innovations but orderly enough to learn from its failures and successes, and provides a shining example upon which other structures should be modelled. In addition, while he makes his opposition to modern capitalism abundantly clear, he praises market systems for their remarkable adaptivity! And while arguing for more bottom-up adaptive processes, he simultaneously calls for larger governments and more intervention on their part.

I conclude this review with a quote from the book which, to my mind, highlights the muddle-headed conclusions the author draws from his voyage through Roman history, panarchy, and the interesting theory of energy return on investment, and seems completely contrary to the bottom-up adaptive systems he argues we should strive for:

Any kind of new democracy must encompass not only communities, towns, cities, and societies, but humankind as a whole. In fact, it’s hard to imagine how we’ll prosper together on this tiny planet if we don’t eventually have some kind of democratic world government.

Categories
Random observations

Anna gets seriously hooked up

Anna and Will seem to be settling in nicely in Japan. They’ve just had an optical fibre connection installed: 100Mbps (in theory). Sweet! I guess we can expect some more regular blogging by Anna from now on (wink wink, nudge nudge).