Tuesday, December 2, 2014

What is Said, What is Unsaid

Political correctness, like all successful forms of social censorship, tends to go through two phases.

There is a loud phase, where average people are actively confronted about why they can't say those mean things any more. Sanctimonious and humourless scolds, buoyed with righteous indignation, delight in vocally complaining about the oppressiveness of some other innocuous set of jokes and observations. Most of the populace doesn't have much of a dog in the fight, so just goes with the default position of trying not to offend people, and accedes to the request. A smaller group of contrarian reactionaries fights a rearguard action of ridicule and stubborn insistence on the status quo, but usually knows it's a losing battle. Cthulhu swims left, after all, so you may as well get on the right side of history.

This phase is the part that everyone remembers.

But after that, there's a quiet phase of political correctness too. Once the kulaks have been beaten into obedience and the new list of proscribed words and ideas becomes the reigning orthodoxy, what's left behind is the silence of the things that used to be said but now aren't. It lingers a while, like a bell that's been struck, and then gradually fades to nothing. Once this happens, it's easy to forget that it was ever there. What goes unsaid long enough eventually goes unthought, as Mr Sailer put it.

And the only way to see what's gone is to look at the past, and see what used to be said but now isn't.

In the case of political correctness, since it's a relatively recent phenomenon, you don't even need to go that far in the past.

Movies are a great example of this. It's a useful exercise to consider which classic movies from the past couldn't get made today.

Some are kept around in the public consciousness as examples of how wicked we used to be - everyone knows you can't make Birth of a Nation any more, but very few people today would even want to. Other movies are partially excused because of their cinematic value, although it's well understood that nobody should get the wrong idea - Gone With the Wind, for instance (1939, 8.2 out of 10 on IMDB). People still like that movie, but nobody would imagine that the current script, with its copious references to 'darkies', would get through even the first read-through at a studio. But this was made a long time ago, so they should get some credit for their good intentions - it was progressive for refraining from using the word 'nigger' and including the word 'damn', both choices proving surprisingly far-sighted. So Gone With the Wind gets a partial pass, like a racist Grandma that people still find lovable as long as you don't get her on the subject of the Japanese or crime in America.

But interestingly enough, there are some modern examples too, that people still think of fondly, but couldn't get made today.

Rain Man, for instance (made in 1988, 8.0 on IMDB), will never ever have a sequel or a reboot. It is inconceivable that you could make it today. The entire premise of the movie is that Dustin Hoffman is an autistic savant, and Tom Cruise is his intolerant brother who needs to transport him across the country in order to get access to an inheritance. The premise of nearly every joke is Dustin Hoffman's odd and innocent behaviour, and Tom Cruise's aggressive, cynical and frustrated ways of dealing with it. (Sample quote, yelled at Dustin Hoffman's head: "You can't tell me that you're not in there somewhere!").

As it turns out, the portrayal of Dustin Hoffman's character is actually rather sympathetic - while a lot of the jokes involve the absurdity of his behaviour, at least part of it is about Tom Cruise being a complete insensitive dick about it all, so the broad message is certainly not that it's hilarious to make fun of autistic people. But that wouldn't stop the autism activists having a fit if it were made today. It wouldn't get greenlit, it wouldn't get seriously discussed, it wouldn't get through the first glance-through by a script reader, and because everyone knows this, it wouldn't get written in the first place.

Or take Silence of the Lambs (made in 1991, 8.6 on IMDB). The offending premise here is a little bit more subtle - the serial killer Buffalo Bill, whom the protagonists are hunting, is a man who kills and dismembers women in part because he was frustrated at his inability to become transsexual. That is to say, he wanted to become a woman as a result of childhood abuse (because why else would you want to become a member of the opposite sex if your thinking wasn't deranged for some good reason), but he was denied a sex change operation due to said abusive circumstances. It's taken as a fairly straightforward premise of the movie that the desire to amputate one's genitals and attempt to become a member of the opposite sex was, prima facie, an indication of likely mental illness. Hence it's not surprising that he would wind up killing women as part of his sexual confusion and jealousy.

These days, it's a mark of bigotry to even raise questions about whether transsexuals should be allowed to use women's bathrooms, or compete in women's MMA tournaments.

If the movie were being rewritten today, Buffalo Bill would probably be a killer driven by misogyny, and his evil childhood influences would be rejection by women and too much reading of the manosphere. THAT will make you kill people! Transsexuals are just fine, in fact they're better than that, they're almost a protected group (or will be soon - trust me).

And these are just the changes that have happened in my lifetime.

The changes in the zeitgeist that happened before one's lifetime are far harder to see.

If you really want to see what they are, pick a few random primary sources from Moldbug, cited next to each relevant post.

If conservatism is the democracy of the dead, the only way to find out how they might vote if they could is to actually read what they've written.

Who knows what ideas are going entirely unthought in your head, not for having examined the subject and rejected it, but by simply having never heard it at all.

Wednesday, November 19, 2014

The worst law in London

What does absurd government monomania in the face of technological irrelevance look like?

Back in the early years of the 20th century, before computers had become widespread, the word 'calculator' actually referred to people. They would perform large numbers of arithmetic calculations, essentially being a slow and kludgy version of a spreadsheet.

Let's suppose, hypothetically, that being a human calculator was a licensed and highly regulated profession in 1920. The government required you to study for years, and prove that you could do hundreds of long division calculations without making a mistake. A whole mystique grew up about 'doing the sums', the examination required to become a calculator. Only licensed calculators were permitted to perform arithmetic operations for more than half an hour a day in a commercial setting.

Then IBM popularises the computer, and Richard Mattessich invents the spreadsheet, and it becomes totally clear to absolutely everybody that 'doing the sums' is completely worthless as a skill set. Not only does keeping the current regulation raise costs enormously, it also produces huge deadweight loss from all the people devoting years of their lives to studying something that's now completely redundant.

What do you think the response of the government and the public would be once it became apparent that the new technology was cheap and easily available? Immediate repeal of the absurd current regime? Outcry and anger at the horrendous government-mandated inefficiency?

Ha! Not likely.

I suspect the old regime would trundle merrily along, and the New York Times would write philosophically-minded pieces extolling the virtues of it.

Because, dear reader, there actually exists regulation exactly this disgraceful - The Knowledge, the required examination for London taxi drivers.

The New York Times Magazine wrote a long piece describing just how much taxi drivers are required to memorise:
"You will need to know: all the streets; housing estates; parks and open spaces; government offices and departments; financial and commercial centres; diplomatic premises; town halls; registry offices; hospitals; places of worship; sports stadiums and leisure centres; airline offices; stations; hotels; clubs; theatres; cinemas; museums; art galleries; schools; colleges and universities; police stations and headquarters buildings; civil, criminal and coroner’s courts; prisons; and places of interest to tourists.
 Test-takers have been asked to name the whereabouts of flower stands, of laundromats, of commemorative plaques. One taxi driver told me that he was asked the location of a statue, just a foot tall, depicting two mice sharing a piece of cheese. It’s on the facade of a building in Philpot Lane, on the corner of Eastcheap, not far from London Bridge.
What, in the name of all that is holy, is the purpose of making it a legal requirement of driving a taxi that you can name the location of a foot-tall statue of two mice that exists somewhere in London?

In the first place, the demand for finding the location of a statue like this from your taxi driver is zero. A precisely estimated zero, as the statisticians say. The revenues side of the ledger is a donut. It is literally inconceivable that the location of this statue has been the subject of a legitimate question towards a London taxi driver in the history of the entire profession. The only benefit is rent-seeking and limiting the size of the taxi industry. So why not just make them memorise the Roman Emperors in chronological order, or the full text of War and Peace? It would serve just as much purpose.

Not only is there no value to your taxi driver knowing this, but if I type in 'statue of two mice in London' into Google, the first image lists the location as 'Philpot Lane'. (The only sites that come up, ironically, are ones referencing the damn test, suggesting just how pointless this knowledge is). The internet has made memorising this kind of trivia, for all possible sets of London trivia, irredeemably useless.

Everything a taxi driver needs to know has been replaced by a smartphone. Everything. Which is why every man and his dog can drive Uber around just fine.

So what threadbare arguments does the NYT offer when, three quarters of the way through the article, it finally gets around to discussing the question of whether this damn test is worth anything?
Taxi drivers counter such claims by pointing out that black cabs have triumphed in staged races against cars using GPS, or as the British call it, Sat-Nav. Cabbies contend that in dense and dynamic urban terrain like London’s, the brain of a cabby is a superior navigation tool — that Sat-Nav doesn’t know about the construction that has sprung up on Regent Street, and that a driver who is hailed in heavily-trafficked Piccadilly Circus doesn’t have time to enter an address and wait for his dashboard-mounted robot to tell him where to steer his car.
Okay, I'll bite. They beat them in staged races by... how much? One minute? Maybe two? Perhaps 60 or 70% of the time? And the value of this time-saving is what, exactly? How does it compare to the extra time the person waited trying to hail a cab because of the artificial limit on the number of taxis?

It seems that New York Times writers are not required to distinguish between statements like 'the revenue side of the income statement here has literally no items on it' and the statement 'this is a positive NPV project that should be invested in'. Disproving the first statement is treated as sufficient to establish the truth of the second. Look, there's a benefit! Really! See, that shows it must be a good idea to do the project.

Perhaps sensing the unpersuasive ring of this argument to anyone who's ever ridden in an Uber and found it cost 40% of the price, the article then tries another tack:
Ultimately, the case to make for the Knowledge may not be practical-economic (the Knowledge works better than Sat-Nav), or moral-political (the little man must be protected against rapacious global capitalism), but philosophical, spiritual, sentimental: The Knowledge should be maintained because it is good for London’s soul, and for the souls of Londoners. 
Well, in that case!

But riddle me this - how, exactly, can I tell whether this egregious rent-seeking, deadweight-loss-generating artificial monopoly is good for London's soul?
The Knowledge stands for, well, knowledge — for the Enlightenment ideal of encyclopedic learning, for the humanist notion that diligent intellectual endeavor is ennobling, an end in itself. 
'Enlightenment'. You keep using that word, I do not think it means what you think it means.

Learning is definitely good. Government-mandated learning, especially when used as part of banning the consensual commercial activity of many individuals, is a wholly separate matter.

Just ask someone from the Enlightenment, like John Stuart Mill:
But, without dwelling upon supposititious cases, there are, in our own day, gross usurpations upon the liberty of private life actually practised, and still greater ones threatened with some expectation of success, and opinions propounded which assert an unlimited right in the public not only to prohibit by law everything which it thinks wrong, but in order to get at what it thinks wrong, to prohibit any number of things which it admits to be innocent.
Like, for instance, driving a cab without studying for years to satisfy a ludicrous exam requirement. 

But it's not just the higher taxi fees and difficulty getting a cab at the wrong time of night that make up the real tragedy here. What's the human toll of making every potential taxi driver learn this kind of nonsense, regardless of whether they ultimately succeed?
McCabe had spent the last three years of his life thinking about London’s roads and landmarks, and how to navigate between them. In the process, he had logged more than 50,000 miles on motorbike and on foot, the equivalent of two circumnavigations of the Earth, nearly all within inner London’s dozen boroughs and the City of London financial district. 
 It was now 37 months since he’d paid the £525 enrollment fee to sign on for the test and appearances. “The closer you get, the wearier you are, and the worse you want it,” McCabe said. “You’re carrying all this baggage. Your stress. Worrying about your savings.” McCabe said that he’d spent in excess of £200,000 on the Knowledge, if you factored in his loss of earnings from not working. “I want to be out working again before my kids are at the age where someone will ask: ‘What does your daddy do?’ Right now, they know me as Daddy who drives a motorbike and is always looking at a map. They don’t know me from my past, when I had a business and guys working for me. You want your life back.”
Apparently this must be a strong case of the false consensus effect, because reading this paragraph filled me with furious rage, but the NYT writes about it as one of those quaint things they do in old Blighty.

In the end, McCabe gets his license, so it's all a happy story!

He does not, however, get the three years of his life and £200,000 back.

How on earth do the parasites who run the testing and administration of this abomination justify all this to themselves? How do they explain their role in this shameful waste of money and fleeting human years, the restrictions on free and informed commerce, the ongoing fleecing of consumers, and the massive, groaning, hulking, deadweight loss of this monstrous crime against economic sense and liberty?

They must be either extraordinarily intellectually incurious, morally bankrupt, or both.

As the Russians are fond of saying, how can you not be ashamed?

Friday, November 7, 2014

They're all IQ tests, you just didn't know it

Here's one to file under the category of 'things that may have been obvious to most well-adjusted people, but were at least a little bit surprising to me'.

Many people do not react particularly positively when you tell them what their IQ is, particularly when this information is unsolicited.

Not in the sense of 'I think you're an idiot', or 'you seem very clever'. Broad statements about intelligence, even uncomplimentary ones, are fairly easy to laugh off. If you think someone's a fool, that's just, like, your opinion, man.

What's harder to laugh off is when you put an actual number to their IQ.

Having done this a couple of times now, I've found that the first thing you realise is that people are usually surprised you can do this at all. IQ is viewed as something mysterious, requiring an arcane set of particular tasks - pattern spotting in specially designed pictures, say - which only trained professionals can administer.

The reality is far simpler. Here's the basic cookbook:

1. Take a person's score on any sufficiently cognitively loaded task = X

2. Convert their score to a normalised score in the population (i.e. calculate how many standard deviations above or below the mean they are, turning their score into a draw from a standard normal distribution). Subtract off the mean score on the test, and divide by the standard deviation of scores on the test: Y = (X - E[X]) / σ(X)

3. Convert the standard normal to an IQ score by multiplying the standard normal by 15 and adding 100:
IQ = 100 + 15*Y

That's it.

Because that's all IQ really is - a normal distribution of intelligence with a mean of 100 and a standard deviation of 15.
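The whole recipe fits in a couple of lines of code. A minimal sketch (the test here, with its population mean of 500 and standard deviation of 100, is purely hypothetical - any sufficiently cognitively loaded test with known population statistics will do):

```python
def iq_from_score(score, test_mean, test_sd):
    # Step 2: standardise the raw score against the test population.
    y = (score - test_mean) / test_sd
    # Step 3: rescale to the IQ convention (mean 100, SD 15).
    return 100 + 15 * y

# Hypothetical test with population mean 500 and SD 100:
print(iq_from_score(650, 500, 100))  # -> 122.5
```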

Okay, but how do you find out a person's score on a large-sample, sufficiently cognitively-loaded task?

Simple - ask them 'what did you get on the SAT?'. Most people will pretty happily tell you this, too.

The SAT pretty much fits all the criteria. It's cognitively demanding, participants were definitely trying their best, and we have tons of data on it. Distributional information is easy to come by - here, for instance. 

You can take their score and convert it to a standard normal as above - for the composite score, the mean is 1497 and the standard deviation is 322. Alternatively you can use the percentile information they give you in the link above and convert that to a standard normal using the NORM.S.INV function in Excel. At least for the people I looked at, the answers only differed by a few IQ points anyway. On the one hand, the percentile route takes into account the possibly fat-tailed nature of the distribution, which is good. On the other hand, you're only getting percentiles rounded to a whole number of percent, which is lame. So it's probably a wash.
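Both routes can be sketched in a few lines. A hedged sketch: the 2000-composite example and its 94th-percentile figure below are hypothetical round numbers chosen so the two routes agree, and in reality the published percentile table is fatter-tailed than the normal fit, which is exactly why the two answers can differ by a few points:

```python
from statistics import NormalDist  # Python 3.8+ standard library

MEAN, SD = 1497.0, 322.0  # SAT composite mean and standard deviation

# Route 1: standardise the raw composite score directly.
def iq_from_raw(score):
    return 100 + 15 * (score - MEAN) / SD

# Route 2: go via the reported percentile, inverting the standard normal
# CDF (the same thing Excel's inverse-normal function does).
def iq_from_pct(pctile):
    return 100 + 15 * NormalDist().inv_cdf(pctile)

print(round(iq_from_raw(2000)))  # -> 123
print(round(iq_from_pct(0.94)))  # hypothetical 94th percentile -> 123
```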

And from there, you know someone's IQ.

Not only that, but this procedure can be used to answer a number of the classic objections to this kind of thing.

Q1: But I didn't study for it! If I studied, I'm sure I'd have done way better.

A1: Good point. Fortunately, we can estimate how big this effect might be. Researchers have formed estimates of how much test preparation boosts SAT scores after controlling for selection effects. For instance:
When researchers have estimated the effect of commercial test preparation programs on the SAT while taking the above factors into account, the effect of commercial test preparation has appeared relatively small. A comprehensive 1999 study by Don Powers and Don Rock published in the Journal of Educational Measurement estimated a coaching effect on the math section somewhere between 13 and 18 points, and an effect on the verbal section between 6 and 12 points. Powers and Rock concluded that the combined effect of coaching on the SAT I is between 21 and 34 points. Similarly, extensive metanalyses conducted by Betsy Jane Becker in 1990 and by Nan Laird in 1983 found that the typical effect of commercial preparatory courses on the SAT was in the range of 9-25 points on the verbal section, and 15-25 points on the math section. 
So you can optimistically add 50 points onto your score and recalculate. I suspect it will make less difference than you think. If you want a back of the envelope calculation, 50 points is 50/322 = 0.16 standard deviations, or 2.3 IQ points.

Q2: Not everyone in the population takes the SAT - it's mainly college-bound students, who are considerably smarter than the rest of the population. Your calculations don't take this into account, because the published figures are percentile ranks among SAT takers, not the general population. Surely this fact alone makes me much smarter, right?

A2: Well, sort of. If you're smart enough to think of this objection, paradoxically it probably doesn't make much difference in your case - it has more of an effect for people at the lower IQ end of the scale. The bigger point though, is that this bias is fairly easy to roughly quantify. According to the BLS, 65.9% of high school graduates went on to college. To make things simple, let's add a few assumptions (feel free to complicate them later, I doubt it will change things very much). First, let's assume that everyone who went on to college took the SAT. Second, let's assume that there's a rank ordering of intelligence between college and non-college - the non-SAT cohort is assumed to be uniformly dumber than the SAT cohort, so the dumbest SAT test taker is one place ahead of the smartest non-SAT taker.

So let's say that I'm in the 95th percentile of the SAT distribution. We can use the above fact to work out my percentile in the total population, given I'm assumed to have beaten 100% of the non-SAT population and 95% of the SAT population:
Pctile (true) = 0.341 + 0.95*0.659 = 0.967

And from there, we convert to standard normals and IQ. In this example, the 95th percentile is 1.645 standard deviations above the mean, giving an IQ of 125. The 96.7th percentile is 1.839 standard deviations above the mean, or an IQ of 128. A surprisingly small effect, no?

For someone who scored in the 40th percentile of the SAT, however, it moves them from 96 to 104. So still not huge. But the further you go down, the bigger it becomes. Effectively you're taking a weighted average of 100% and whatever your current percentile is, and that makes less difference when your current one is already close to 100.
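The adjustment can be sketched directly, under the same simplifying assumptions as above (65.9% of the cohort sat the test, and every non-taker is ranked below every taker):

```python
from statistics import NormalDist  # Python 3.8+ standard library

COLLEGE_SHARE = 0.659  # BLS figure cited above: share who went on to college

def unadjusted_iq(sat_pctile):
    # Naive version: treat the within-takers percentile as population-wide.
    return 100 + 15 * NormalDist().inv_cdf(sat_pctile)

def adjusted_iq(sat_pctile):
    # You beat all the non-takers, plus sat_pctile of the takers.
    true_pctile = (1 - COLLEGE_SHARE) + sat_pctile * COLLEGE_SHARE
    return 100 + 15 * NormalDist().inv_cdf(true_pctile)

for p in (0.95, 0.40):
    print(round(unadjusted_iq(p)), "->", round(adjusted_iq(p)))
# prints: 125 -> 128, then 96 -> 104
```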

Of course, the reality is that if someone is offering these objections after you've told them their IQ, chances are they're not really interested in finding out an unbiased estimate of their intelligence, they just want to feel smarter than the number you told them. Perhaps it's better to not offer the ripostes I describe.

Scratch that, perhaps it's better to not offer any unsolicited IQ estimates at all. 

Scratch that, it's almost assuredly better to not offer them. 

But it can be fun if you've judged your audience well and you, like me, occasionally enjoy poking people you know well, particularly if you're confident the person is smart enough that the number won't sound too insulting.

Of course, readers of this august periodical will a) be entirely comfortable with seeing reality as it is, and thus nearly all be pleased to get an unbiased estimate of their IQ, and b) be whip-smart anyway, so the news could only be good regardless.

If that's not the case... well, let's just say that we can paraphrase Eliezer Yudkowsky's advice to 'shut up and multiply', in this context instead as rather 'multiply, but shut up about it'.

The strange thing is that even though people clearly are uncomfortable having their IQ thrown around, they're quite willing to tell you their SAT score, because everybody knows it's just a meaningless test that doesn't measure anything. Until you point out what you can measure with it. 

I strongly suspect that if SAT scores were given as IQ points, people would demand that the whole thing be scrapped. On the other hand, the people liable to get furious were probably not that intelligent anyway, adding further weight to the idea that there might be something to all this after all.

Sunday, November 2, 2014

On Being Sensible

There comes a point in one’s life where one surrenders to the lure of the practical, rather than the romantic. Bit by bit, the arguments for whimsical and aesthetic considerations make way for the fact that it is generally better to simply have one’s affairs in order. I suppose this is part of what maturity means – the extension of one’s planning horizon, so that the present value of sensible choices outweighs the desire to do things merely for the je ne sais quoi of seeing something new.

For those of us of a mostly sensible bent, the appeal of solid, practical decisions doesn’t need much extra boosting. But even among such as I, there is still romance, in the broad sense. It just shows up in unexpected places.

While I don’t know exactly when this shift towards sensibility occurred (or even if it had a particular turning point), I do know one of the marks of its arrival.

The clearest indicator, at least to me, is the choice of which seat to choose on the aeroplane.

At some point, the desire to be able to easily get to and from the bathroom becomes the thing one values in this microcosm of life’s choices. Stepping over people is a pain, not being able to pee when one wants to is a pain, waking up people who fell asleep at inopportune times is definitely a pain, especially for the introverted. Life is just easier when you don’t have to worry about these things.

And yet, sometimes an overbooked flight forces you into a window seat, and you remember when you used to pick the window to watch the world beneath. You gaze out into the silvery moonlight, with wisps of clouds floating below you. Tiny patches of criss-crossing light mark the small towns far distant, defying the sea of darkness. The steady glow appears as lichen, growing in odd patterns along the grooves of a rock in an otherwise barren desert.

How many generations of your ancestors lived and died without seeing a sight so glorious?

How many would trade this for slightly more convenient bathroom access?

It is worth noting that this tradeoff does not need to be explained to small children. They instinctively get what’s amazing about watching the world below at takeoff and landing.


Particularly for those of us whose affairs are mostly in order, it is worth being occasionally reminded of the lesson.

Tuesday, October 21, 2014

The Various Ironies of Gough Whitlam

Former Australian Prime Minister Gough Whitlam died recently, at age 98. Predictable hagiographies followed, with cringe-inducing link titles like 'Gough Whitlam a Martyr and a Hero'. This causes right-thinking people to be torn between the polite and worthy tradition of not speaking ill of the recently-deceased, and a mildly grating feeling that the hagiographers write the narratives when this happens. Obituaries are hard to do well, that's for sure, and most don't even really try.

Say whatever else you will about Gough Whitlam, but he was a transformative Prime Minister. Unfortunately, the balance of this transformation was decidedly negative. To his credit, Whitlam enacted some truly good policies, most notably getting rid of the draft, and cutting tariffs. He also brought in some others that were probably inevitable, like no-fault divorce and recognition of China. He also had some disastrous ones. The Racial Discrimination Act was probably his most poisonous legacy, most recently in the news for being part of the trashing of free speech in the prosecution of Andrew Bolt. Getting rid of university fees almost certainly contributed to the permanent underfunding and subsequent underperformance of Australian universities to this day. He also cut off Rhodesia (leading it to the brilliant sunlit uplands it's in today), and rewarded the buffoonish Lionel Murphy for his bizarre raids on ASIO offices (which tarnished Australia's reputation as a serious state in intelligence matters) by appointing him to the High Court (where he was predictably and comically awful).

But the big irony of the Whitlam years involves the Liberal Party. They struggled so mightily to unseat him, including blocking the funding of government to provoke a constitutional crisis. Blocking supply, I might add, is something the Libs attract an oddly small amount of criticism for, given its role in the whole affair. Whitlam was famously dismissed by Governor-General John Kerr (who has been the boogie-man to the Labor party faithful ever since). Whitlam was also then subsequently voted out by a huge margin in the ensuing elections (a fact that Whitlam fans never seem to discuss very much, since it doesn't fit the narrative very well).

So the Liberals finally won their big victory over Whitlam! And what was their big reward?

Eight years of Malcolm Bloody Fraser, the most disappointing Liberal Prime Minister ever, and one of the worst overall (giving Gough a red hot go for that title).

If the election is between Fraser and Whitlam, honestly, why even bother? It's like the David Cameron v Gordon Brown election - as Simon and Garfunkel said, every way you look at it, you lose.

Thankfully, conservatives eventually had something to cheer for when Fraser was kicked out and Australia finally got some sensible and important economic reforms, coming from... Labor Prime Ministers Bob Hawke and Paul Keating! The former was excellent, the latter was pretty decent too (and superb as Hawke's treasurer). Ex-post, is there a single member of the Liberal Party today (excluding the braindead and the hyper-partisan) who, if sent back in time to 1983 but knowing what they do now, would actually vote for Fraser over Hawke?

And yet Whitlam is the 'hero and the martyr'. Hawke plays second fiddle in Labor Party folklore, despite being excellent in ways that were of mostly bipartisan benefit (floating the dollar, cutting inflation, and other instances of important micro-economic reform).

Yeah, I don't get it either.

Monday, October 20, 2014

More Gold

It always warms my heart when the mere title of an essay makes me laugh. Herein, the estimable Theodore Dalrymple, with 'Your Dad is Not Hitler'. His other recent essay, 'A Sense of Others' is also fantastic.

Honestly, if Taki's Magazine had Moldbug and Heartiste writing for it, one would scarcely need to go anywhere else.

Sunday, October 19, 2014

Yes, we are still on for the thing tonight, just like we said, god dammit.

Continuing my descent into old fogey-ness, I seem to have encountered another shift in the zeitgeist that marks off my age. The first one was the enormous increase in the number of text messages sent by the average teenager. But that was something one would mostly only see if one actually had a teenager around the house. Since this doesn't apply to me, I only find out about it in odd magazine articles.

But there is another trend that I have had cause to experience firsthand - the proliferation in confirmatory text messages over every social arrangement.

Up until recently, my general presumption was that things worked as follows:

-You and person X would agree to do activity Y at time Z.
-If one of you couldn't make it, you would inform the other ahead of time.
-Absent that, it is assumed that the arrangements stand and you both turn up at time Z.

You, like me, might presume that this is how things still work, yes?

You, like me, might end up being rather surprised.

These days, a lot of people, particularly young people, seem to have decided collectively that they're switching from an opt-out system of arrangements to an opt-in one. In other words, plans to do things in two days' time are merely a suggestion, a vague agreement-in-principle. If you actually intend to follow through, you have to confirm this.

I found this out when I started getting messages asking if we were still on for what I considered agreed-upon plans. I used to respond with 'of course' or something like that, wondering vaguely why this was now the thing that people did, but dismissing it as evidence of their neediness or insecurity. Confirming to them would seem pointless, but not a big deal.

I remember complaining to a friend, and saying that it was refreshing to find people who didn't need this. I was meeting someone new for coffee that evening, and was glad that we hadn't done the obligatory text message dance, which seemed like a good sign. That is, until she didn't actually turn up. Apparently she had decided that not receiving a confirmation was an indication that things were canceled, so much so that she apparently hadn't bothered to message me to check.

To paraphrase Frank Costanza, as I rained abusive text messages on her, I realized there had to be another way. After my rage subsided, it became pretty clear that my attempts to fight a rearguard action against the culture were as doomed as the 50's protests against rock and roll. So I now suck it up and send confirmatory messages. Sometimes one still isn't enough - I've sent a confirmation the night before, only to get another query confirming things an hour before. Who are these people, and what on earth is wrong with them?

I think the reality is that people have become so flaky that this is actually the more efficient social arrangement. When enough people become sufficiently inconsiderate that they just cancel all the time at the last minute, confirmations are actually time-saving. They're only a net drain when the probability of last-minute cancellations is suitably low, at which point they're merely a nuisance. This was what I assumed was the case, but apparently not. The real shift will have arrived when cancelling is so common that it's not even considered that impolite. Once again, I'm pretty sure this is a generational thing.
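
The trade-off here can be made concrete with a little back-of-the-envelope arithmetic. The specific costs and probabilities below are invented purely for illustration:

```python
# Back-of-the-envelope model: when do confirmation texts pay for themselves?
# All numbers are illustrative assumptions, not data.

def expected_wasted_minutes(p_cancel, confirm=False,
                            confirm_cost=1.0, no_show_cost=45.0):
    """Expected minutes lost per social arrangement.

    p_cancel: probability the other party silently bails.
    confirm: whether you exchange a confirmatory text, which costs a
    minute of overhead but catches any cancellation before you show up.
    """
    if confirm:
        return confirm_cost          # you always pay the texting overhead
    return p_cancel * no_show_cost   # you occasionally pay the big cost

# With flaky friends (20% silent cancellation), confirming wins:
flaky_confirm_wins = (expected_wasted_minutes(0.20, confirm=True)
                      < expected_wasted_minutes(0.20))

# With reliable friends (1% cancellation), the texts are a net nuisance:
reliable_confirm_loses = (expected_wasted_minutes(0.01, confirm=True)
                          > expected_wasted_minutes(0.01))
```

Under these made-up numbers, the crossover sits at roughly a 2% cancellation rate - which is consistent with the point above: once flakiness passes some modest threshold, the confirmation ritual stops being neediness and starts being efficient.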

If narcissism and self-centredness are the psychological traits of our age, then flakiness is merely the natural result. Everyone else's time is less valuable than mine (one reasons), so what difference does it make if I change plans on someone at the last minute? Actually, it's probably worse than that - the median reasoning (such as it is) is probably closer to 'I have something better on, or can't be bothered. Ergo, I won't go'. To that extent, expecting confirmatory text messages at least indicates an ability to escape from pure solipsism and anticipate everyone else's self-centredness too. Which, at the margin, I guess is a good thing, even if the need for such anticipation is ultimately depressing.

Plus I just hate sending zillions of text messages, which annoys me too. Why? Same underlying reason.

Monday, October 6, 2014

Crazy is not a hypothesis

One of the criticisms I sometimes hear of behavioral finance, mostly from the rational crowd, is that one is just showing that 'people are crazy' or 'people are stupid'. This is always said dismissively, as if such an observation were trivially true and thus unworthy of observation or elaboration.

The first indication that this is a vastly overblown criticism is given by the fact that, despite the claimed triviality and obviousness of people's stupidity and craziness, these traits don't seem to find their way into that many models - the agents in those models are all rational, you see.

Well, actually, it's a bit subtler than that. Stupid agents have actually been in models for quite a while now, most notably in models that include noise traders, trading on false beliefs or for wholly idiosyncratic reasons.

But agents who could be described as 'crazy' are harder to find - acting in completely counterproductive or irrational ways given a set of preferences and information. So why is that?

The reason, ultimately, is that 'crazy' is usually not a useful hypothesis. It's a blanket name given to a set of behaviors that falls outside of what could be considered rational behavior, or even partially rational (such as kludgy rules of thumb or naive reinforcement learning).

And the reason you know that crazy isn't a useful hypothesis is that it tells you very little about how someone will act, other than to specify what they won't do. How would you go about modeling the behavior of someone who was truly crazy? Maybe you could say they act at random (in which case things look like the noise traders that we labelled as stupid). But are you really sure that their behavior is random? How sure are you that it's not actually predictable in ways you haven't figured out? It seems pretty unlikely that there are large fractions of traders who are in bona fide need of institutionalisation in a sanatorium, if for no other reason than someone who was really bonkers would (hopefully) struggle to get a job at the JP Morgan trading desk or acquire enough millions of dollars to move financial markets.

The whole point of behavioral economics (and abnormal psychology before it) is to figure out how people are crazy. When someone is doing something you don't understand, you can either view it as mysterious and just say that they went mad, or you can try to figure out what's driving the behavior. But madness is an abdication of explanation.

Good psychiatry reduces the mystery of madness to specific pathologies - bipolar disorder, psychopathy, depression, autism, what have you. 'Madness' functions as the residual claimant, thankfully getting smaller each year.

Good behavioral finance ultimately strives at similar ends - maybe people are overconfident, maybe they use mental accounting, maybe they exhibit the disposition effect. These are things we can model. These things we can understand, and finally cleave the positive from the normative - if rational finance is a great description of what people should do but a lousy description of what they do do, then let's also try to figure out what people are actually doing, while still preaching the lessons we formulated from the rational models.

To say that behavioral finance is just 'people acting crazy' is somewhat like saying that all of economics can be reduced to the statement 'people respond to incentives'.  In a trivial sense, it may not be far from the truth. But that statement alone doesn't tell you very much about what to expect, as the whole science is understanding the how and the why of incentives in different situations - all the hard work is still to be done, in other words.

It's also worth remembering this in real life situations - when someone you know seems to be acting crazily, it's possible they have an unusual form of mental illness as yet unknown to you, but it's also possible that you simply have inadequate models of their preferences and decision-making processes. Usually, I'd bet on the latter.

Thursday, September 25, 2014

A thing I did not know until recently

The word 'se'nnight'. It's an archaic word for 'week', being a contraction of 'seven night(s)'. The most interesting thing is that it makes it immediately clear where 'fortnight' comes from, being a similar contraction of 'fourteen night(s)'. The more you know.

Via the inimitable Mark Steyn.

Tuesday, September 23, 2014

On the dissolving of political bands and the causes impelling separation

Well, Scottish independence has come and gone, thank God. The list of grievances being cited was pathetic enough to make even the complaints of the American colonists (already laughably overblown) seem like the accounts of survivors from North Korean prison camps.

But one thing this whole debacle really illustrated is the following: very few people these days think in a principled way about secession. When, if ever, do a group of people have a right to secede from a country? Do they even need legitimate grievances? How many of them need to agree, and by what margin?

This is certainly true in America. What are the two historical events that most people in this country agree on? Firstly, that the American revolution was a jolly good thing and entirely appropriate. And secondly, that the civil war was fortunately won by the North, whose cause was ultimately just (this is probably still somewhat disputed in the South today, but I think it's probably broadly agreed on overall).

Ponder, however, the surprising difficulty in reconciling those two positions in a principled manner. For some thoughts on the justification for the Confederacy, meet Raphael Semmes, a Captain of the Confederate States Navy. Have a read of how an actual member of the Confederacy justifies the South's position. It's all in the first couple of chapters of his book, 'Memoirs of Service Afloat', which Gutenberg has for free here.

If you're too lazy to read the original, his argument is quite simple. Firstly, he argues that the same rights that gave the states the ability to join the union gave them the right to leave - they were separate political entities capable of their own decisions, a status that predated the union. Second, he argues that the people of the North and the people of the South are fundamentally dissimilar in attitude and culture. And finally, that the North had been oppressing the South over the years, and the South simply wanted out.

Now, you may consider these arguments persuasive or unpersuasive. But before you decide, it is worth comparing them to the arguments that the American Colonists claimed as their justification for seceding from Britain. Semmes' argument, if you boil it down, essentially says that we claim the same right to secede from the Union as the thirteen colonies claimed as their right to secede from Britain.

Perhaps slavery is the trump card, the elimination of which (presuming for a moment that this was the sole rationale for the war from the Northern perspective, a far from obvious point) had such moral force that it overwhelmed all the other arguments. But without this logical Deus Ex Machina, it is quite challenging to come up with a consistent set of principles under which the colonies' independence was justified but the South's was not. It's not impossible, but it's not straightforward either. And when you're done with that, be sure to reconcile it with your thoughts on independence in Kosovo, Catalonia, Chechnya, the Kurds in Turkey, ISIS in northern Iraq and other modern examples.

Or put it this way - hypothetically, had the South agreed to abolish slavery, and then done so in a way that meant reinstating it was impossible, but afterwards still insisted on secession, would their cause have been justified then?

I really don't know what most Americans would say to that one.

I don't think Americans are alone in this unthinking attitude to the question.

You saw this exactly on display in the Scottish fiasco. Most political unions don't contain explicit descriptions of how they can be dissolved. This goes doubly so for countries like Britain, which don't have a formal constitution at all.

What this means is that it's entirely unclear when or which bits of it can break off. Scotland at least had the virtue of being a polity with its own history, own accent, own traditions and so forth. People know who 'The Scots' are, so you don't need to explain why they should be considered their own entity. But what if Glasgow decided that, notwithstanding the opinion of the rest of Scotland, it wanted to secede from the UK itself? Could it do so? Population-wise, there are as many people living in Glasgow (596,000) as in Montenegro (625,000) or Luxembourg (549,000). And if Glasgow, what about Inverness (72,000)?

And not only that, but the lack of formality was on display by the method of deciding the question. A single referendum, with the Scots as the only people being consulted. Moreover, for a decision this momentous, you might assume that you need some kind of supermajority or something. But since we can't specify that kind of thing ahead of time, the default assumption is that a simple majority will do, one time. If 50.01% of Scots want to leave, then out they go. Bad luck for the remaining 49.99%. Bad luck for any Scots yet to come who might have preferred the union. I suspect that if Cameron had thought he might lose, he might have asked for a higher standard. But a) how would he justify that higher number, and b) if he did, would he then be bound by the outcome?

For a lot of major political decisions, the public never gets consulted at all. It's not clear if the British will get a vote on whether to stay in the EU. They did get a referendum in 1975 to decide whether to join the European Economic Community (which later became the EU), but you'd be a bold man to claim that signing up to the EEC meant a full knowledge of the leviathan that the EU would later become. In November 2012, support for leaving the EU was 56%. Under the one-time, one-vote rule, that could have been enough to get them out. One might say that holding this vote would force exclusion from the EU on future Brits, who might not be able to change their minds. Then again, one could equally say that the vote in 1975 forced inclusion on lots of modern Brits who now also can't change their minds.

I don't pretend there are easy answers to any of these questions. The libertarians would say every individual has the right to secede from any group, which is a consistent, if difficult-to-implement, position.

But what the whole Scotland thing has shown is that avoiding thinking about these kinds of questions doesn't make them go away. They're going to come up periodically, and you just get incoherent answers by not having any contingency plans.

Everyone goes into marriages thinking they'll last forever. And yet we still think it prudent to have divorce procedures well known in advance.

Since I'm mostly a fan of formalism, I think countries would benefit from the same arrangements.

Sunday, September 14, 2014

Of Behavioural Red Flags and Unfunded Campaign Promises

One of the key meta-points of the rationality crowd is that one needs to explicitly think about problem-solving, because one's intuitions will frequently be wrong. In general, sophistication about biases is crucially important - awareness of the possibility that one might be wrong, and being able to spot when this might be occurring. If you don't have that, you'll keep making the same mistakes over and over, because you won't consider that you might have screwed up last time. Instead, the world will just seem confusing or unfair, as unexpected (to you) things keep happening over and over.

For me, there are a number of red flags I have that indicate that I might be screwing something up. They're not ironclad indications of mistakes, but they're nearly always cause to consider problems more carefully.

The first red flag is time-inconsistent preferences (see here and here). When you find yourself repeatedly switching back and forth between preferring X and preferring Not X, this is usually a sign that you're screwing something up. If you go back and forth once or twice, maybe you can write that off as learning  due to new information. But if you keep changing your mind over and over, that's harder to explain. At least in my case, it's typically been due to some form of the hot-cold empathy gap - you make different decisions in cold, rational, calculating states versus hot, emotionally charged states, but in both types of state you fail to forecast how your views will predictably change when you revert back to the previous state. I struggle to think of examples of when repeatedly changing your mind back and forth over something is not in fact an indication of faulty reasoning of some form.

The second red flag is wishing for less information. This isn't always irrational - if you've only got one week to live, it might be entirely sensible to prefer to not find out that your husband or wife cheated on you 40 years ago, and just enjoy the last week in peace. (People tempted to make confessions to those on their deathbed might bear in mind that this is probably actually a selfish act, compounding what was likely an earlier selfish act). But for the most part, wishing to not find something out seems suspicious. Burying one's head in the sand is rarely the best strategy for anything, and the desire to do so seems to be connected to a form of cognitive dissonance - the ego wanting to protect the self-image, rather than admit to the possibility of a mistake. Better advice is to embrace Eugene Gendlin:
What is true is already so.
Owning up to it doesn't make it worse.
Not being open about it doesn't make it go away.
And because it's true, it is what is there to be interacted with.
Anything untrue isn't there to be lived.
People can stand what is true,
for they are already enduring it.
The third red flag is persistent deviations between stated and revealed preference (see, for instance, here and here). This is what happens when you say you want X and are willing to pay for it at the current price, and X is within your budget set, and you keep not purchasing X. The stated preference for liking X is belied by the revealed preference to not actually buy it. Being in the budget set is key - if one has a stated preference for sleeping with Scarlett Johansson but is not doing so, this is unlikely to be violating any axioms of expected utility theory, whatever else it may reveal.

Conflicts between stated and revealed preference come up often. As I've discussed before, for a long time I had a persistent one when it came to learning Spanish. I kept saying I wanted to learn it, and would try half-heartedly with teach-yourself-Spanish MP3s, but would pretty soon drift off and stop doing it.

This inconsistency can be resolved one of two ways. Firstly, the stated preference could be correct, and I have a self-control problem: Spanish would actually be fun to learn, but due to laziness and procrastination I kept putting it off for more instantly gratifying things. Secondly, the revealed preference could be correct: learning Spanish isn't actually fun for me, which is why I don't persist in it, and the stated preference just means that I like the idea of learning Spanish, probably out of misguided romantic notions of what it will comprise.

Having tried and failed at least twice (see: time-inconsistent preferences), I decided that the second one was true - I actually didn't want to learn Spanish. Of course, time-inconsistency being what it is, every few years it seems like a good idea to do it, and I have to remind myself of why I gave up last time.

Being in the middle of one such bout of mental backsliding recently, I was pondering why the idea of learning another language kept holding appeal to me, even after thinking about the problem as long as I had. I think it comes from the subtle aspect of what revealed preference is, this time repeated with emphasis on the appropriate section:
when you say you want X and are willing to pay for it at the current price, and X is within your budget set, and you keep not purchasing X
Nearly everything comes down to actual willingness to pay. Sure, it would be great to know Spanish. Does that mean it is great to learn Spanish? Probably not. One thinks only of the final end state of knowledge, not of the process of sitting in the car trying to think of the appropriate Spanish phrase for whatever the nice-sounding American man is saying, and worrying if the mental distraction is increasing one's risk of accidents.

Of course, it's in the nature of human beings to resist acknowledging opportunity cost. There's got to be a way to make it work!

And it occurred to me that straight expressions of a desire to do something have a lot in common with unfunded campaign promises. I'll learn the piano! I'll start a blog! I'll read more Russian literature!

These things all take time. If your life has lots of idle hours in it, such as if you've recently been laid off, then great, you can take up new hobbies with gay abandon.

But if your week is more or less filled with stuff already, saying you want to start some new ongoing task is pointless and unwise unless you're willing to specify what you're going to give up to make it happen. There are only so many hours in the week. If you want to spend four of them learning piano, which current activities that you enjoy are you willing to forego? Two dinners with friends? Spending Saturday morning with your kid? Half a week's worth of watching TV on the couch with your boyfriend? What?

If you don't specify exactly what you're willing to give up, you're in the exact same position as politicians promising grand new spending schemes without specifying how they're going to pay for them. And this goes doubly so for ongoing commitments. Starting to listen to the first teach-yourself-Spanish MP3, without figuring out how you're going to make time for the remaining 89 in the series, is just the same as deciding you want to build a high-speed rail from LA to San Francisco, and constructing a 144-mile section between Madera and Bakersfield without figuring out how, or if, you're going to be able to build the whole thing.

And like those politicians you scorn, you'll find yourself tempted to offer the same two siren-song mental justifications that get trotted out for irresponsible programs everywhere.

The first of the sirens is that you'll pay for the program by eliminating waste and duplication elsewhere. Doubt not that your life, much like the wretched DMV, is full of plenty of waste and duplication. But doubt it not as well that this waste and duplication will prove considerably harder to get rid of than you might have bargained for. If your plan for learning Spanish is 'I'll just stop wasting any time on the internet each day'... yeah, you're not going to get very far. Your system 2 desire to learn piano is like Arnie, and your desire to click on that blog is like the California Public Sector Unions - I know who my money's on. The amount of waste you can get rid of is probably not enough to fund very much activity at all. Just like in government.

The second siren is the desire to just run at a budget deficit. The area of deficit that almost always comes up is sleep. I'll just get up an hour earlier and practice the piano! Great - so are you planning to go to bed an hour earlier too? If so, we're back at square one, because something in the night's activities has to be cut. If not, do you really think that your glorious plan to switch from 8 hours a night to 7 hours a night, in perpetuity, is likely to prove feasible (absent long-term chemical assistance) or enjoyable (even with such assistance)? Every time I've tried, the answer has been a resounding 'no'. I say 'every time' advisedly, as this awful proposal manages to seem appealing again and again. You can in fact live on less sleep for extended periods - just ask parents with newborn children. It's also incredibly unpleasant to do so - just ask parents with newborn children. They'll do it because millions of years of evolutionary forces have caused them to feel such overwhelming attachment to their children that the sacrifice is worth it. And you propose to repeat the feat to learn the piano? That may seem like a great idea when you start out for the first night, fresh from a month of good sleeping. It seems like less of a good idea the next morning when your alarm goes off an hour earlier than usual. And I can assure you it almost certainly will not seem like a good idea after a month of being underslept, should you in fact get that far. Iterate forward, and don't start.

The real lesson is to only undertake things that you're actually willing to pay for. If you don't know what you're willing to give up, you don't actually know if you demand something, as opposed to merely want it. Confuse the two at your peril.

Wednesday, September 10, 2014

The limits of expected utility

It is probably not a surprise to most readers of this august periodical to find out that I yield to few people in my appreciation for economic reasoning. Mostly, the alternative to economic reasoning is shonky, shoddy intuitions about the world that make people worse off. Shut up and multiply is nearly always good advice - work out the optimal answer, not what makes you feel good. The alternative is, disturbingly often, more people dying or suffering just so you can feel good about a policy.

But perhaps it may be a surprise to find that I not infrequently end up in arguments with economists about the limits of economic reasoning in personal and ethical situations. There is often a tendency to confuse the 'is' and the 'ought'. We model people as maximising expected utility, usually over simple things like consumption or wealth, because these are powerful tools to help us predict what people will do on a large scale. But for the question of what one ought to do, it is particularly useless to do what some economists do and say, 'Well, I do whatever maximises my utility'. No kidding! So how does that help you decide what's in your utility function? Does it include altruism? If so, to whom and how much? Do you even know? A lot of ethical dilemmas in life come from not knowing how to act, which (if you want to reduce everything to utility terms) you could say is equivalent to not knowing how much utility or disutility something will give you. There's ways to find that out, of course, but those ways mostly aren't economics.

More importantly, this argument tends to sneak in a couple of assumptions that, when brought to the fore, are not nearly as obvious as the economics advice makes them.

Firstly, it's not clear that utility functions are fixed and immutable. This is perhaps less pressing when modeling monopolistic competition among firms, but is probably more first-order in one's own life. Could you change your preferences over time so that you eventually got more joy out of helping other people, versus only helping yourself? And if so, should you? It's hard to say. You could think about having a meta-utility function - utility over different forms of utility. For the same amount of pleasure, I'd rather get pleasure from virtue than vice. This isn't in most models, although it probably could be included in some behavioral version of stuff (I suspect it may all just simplify to another utility function in the end). But even to do this requires a set of ethics about what you ought to be doing - you need to specify what utility-generating behaviour is admirable and what isn't. Philosophers have debated what those ethics should be for a long time, but you'll need to look outside economics to find what they are.

Mostly, people just assume that whatever they like now is good enough. Of course, they're assuming their desires don't raise any particular ethical dilemmas. You can always think about extreme cases, like if someone gains utility from torturing people. Most die-hard economists would probably still not give the torturer the advice to just do what gives them utility. They'd try to wiggle out by saying that the torturer would get caught, but that just punts the question further down the road - if they won't get caught, does that mean they should do it? You'd probably say either a) try to learn to get a different utility function that gets joy from other things (but what if they can't?), or if they're more honest b) your utility isn't everything - some form of deontology applies, and you just shouldn't torture people for fun simply because you find it enjoyable.

Of course, if you admit that deontology applies, some things are just wrong. It doesn't matter if the total disutility from 3^^^3 dust specks getting in people's eyes is greater, you'd still rather avoid torture. Eliezer Yudkowsky implies that the answer to that question is obvious. How many economists would agree? Fewer than you'd think. I'm probably not among them either, although I don't trust my intuitions here.

But fine, let's leave the hypotheticals to one side, and consider something very simple - should you call your parents more often than you do? For most young people, I'd say the answer is yes, even if you don't enjoy it that much. Partly, it's something you should endeavour to learn to enjoy. Even if this doesn't include enjoying all of the conversation, at least try to enjoy the part of being generous with one's time. Though the bigger argument is ultimately deontological - children have enormous moral obligations to their parents, and the duties of a child in the modern age include continuing to be a support for one's parents, even if you might rather be playing X-Box. If you ask me to reason this from more basic first principles, I will admit there aren't many to offer. Either one accepts the concept of duties or one doesn't.

In the end, one does one's duty not always because one enjoys it, but simply because it is duty. Finding ways to make duty pleasurable for all concerned is enormously important, and will make you more likely to carry it out, but in the end this isn't the only thing at stake. There is more to human life than your own utility, even your utility including preferences for altruism. It would be wonderful if you can do good as a part of maximising your expected utility. Failing that, it would be good to learn to get utility from doing good, perhaps by habit, even if that's not currently in your utility function. Failing that, do good anyway, simply because you ought to.

Sunday, August 24, 2014

Making the living as interesting as the dead

Dinosaurs are endlessly fascinating things. They may be one of the biggest common denominators of interest among young children, both male and female. They’re huge, they’re weirdly shaped, and perhaps most importantly, they don’t exist. You see only the skeleton, and drawings. As a consequence, you’re forced to imagine what they would have been like. This means that they get a romance and curiosity attached to them that seldom attaches to the animals that the world actually has. One wants what one can’t have, after all. What is Jurassic Park, if not a combination of Frankenstein and man’s attempt to recreate the Garden of Eden?

Of course, if dinosaurs actually existed, they’d just be one more animal in the zoo. You can take this one of two ways. Either dinosaurs are overrated, or we should be more interested than we are in things that actually exist. For aesthetic reasons, I prefer the latter choice - one ought to learn to take joy in the merely real. Of course, getting people to see that is easier said than done.

Doubt it not, a giraffe is as bizarre as any dinosaur. One may appreciate this on an intellectual level, but it is hard to view one with quite the same wonder. The most effective way I've seen to demonstrate the point is at the Harvard Museum of Natural History. Firstly, the exhibits move from dinosaurs, to ice age skeletons, and then on to living animals, encouraging the juxtaposition quite naturally.

But most importantly, to get people to be intrigued by modern animals, the most successful trick is to show them not just a giraffe, but a giraffe skeleton. It encourages you to look at a giraffe the way you look at the dinosaurs. And when you do, you realise that it’s comparably tall, wackily elongated, and many of the elements of the skeleton share features with those in the previous rooms. They also show you a real giraffe next to it, completing the picture that, in the previous rooms, you had to fill in with your imagination. But most of the room is filled with skeletons of living species. A rhino or a hippo skeleton could easily be placed in the previous rooms without seeming out of place. If you judge a dinosaur less by its age and more like a child would, as a strange giant animal, the dinosaurs still exist. We just stopped noticing them.

The message is subtle but powerful. You would do well to be less fascinated with dinosaurs, and more fascinated with animals. The dead are intriguing, but so are the living. The latter have the advantage that you can still see them. So afterwards, why not take a trip to the zoo?

Sunday, August 17, 2014

On Memory and Imagination

It recently occurred to me that I have a very poor memory, but not in the standard way that people suspect.

By most metrics, I remember a lot of things. I have entire parts of my brain devoted to song lyrics, which is exactly the kind of odd thing that strikes people as notable precisely because of its triviality. I remember books I've read for a long time, and can usually talk usefully about them to people who've only recently read them. I remember ideas even better, and details of useful examples that illustrate the things I believe.

So for the most part, this qualifies me as having a reasonable memory. But nearly all the things I remember well are to do with words and concepts. This isn't universal – I'm bad at names and birthdays, for instance, but those are about the only things that might give it away.

The part I lack, however, is the ability to form mental pictures of what things look like. Yvain wrote about this in the context of imagination.
There was a debate, in the late 1800s, about whether "imagination" was simply a turn of phrase or a real phenomenon. That is, can people actually create images in their minds which they see vividly, or do they simply say "I saw it in my mind" as a metaphor for considering what it looked like?
Upon hearing this, my response was "How the stars was this actually a real debate? Of course we have mental imagery. Anyone who doesn't think we have mental imagery is either such a fanatical Behaviorist that she doubts the evidence of her own senses, or simply insane." Unfortunately, the professor was able to parade a long list of famous people who denied mental imagery, including some leading scientists of the era. And this was all before Behaviorism even existed.
The debate was resolved by Francis Galton, a fascinating man who among other achievements invented eugenics, the "wisdom of crowds", and standard deviation. Galton gave people some very detailed surveys, and found that some people did have mental imagery and others didn't. The ones who did had simply assumed everyone did, and the ones who didn't had simply assumed everyone didn't, to the point of coming up with absurd justifications for why they were lying or misunderstanding the question.
There was a wide spectrum of imaging ability, from about five percent of people with perfect eidetic imagery to three percent of people completely unable to form mental images.
Dr. Berman dubbed this the Typical Mind Fallacy: the human tendency to believe that one's own mental structure can be generalized to apply to everyone else's.
This holds true both for the parts of memory I have, as well as those I lack. My relatively strong ability to remember the written word has a ton of variation. A smart friend of mine remarked years ago that he found it almost impossible to remember much from the novels he's read. I remember thinking at the time that this seemed very tragic. 

For my part, I would score quite low on the ability to form mental images. It's not non-existent - there are images, but they're hazy, and the details tend to shrink away when you try to focus in on them. When I read books, I have only a vague vision of what the people involved look like, or the places where the action is taking place. I would find it very hard to do the job of a writer and keep in my head a consistently detailed image of a person's physical appearance or the scenery. If I thought hard I could add in enough detail to make it convincing, but no amount of detail would cause me to actually have a clear picture of it myself.

I once saw a fascinating hint of how you might kludge things if you lacked a strong ability to form images and had to write about them anyway. This was when I saw the study of a friend's mother, who writes fiction. Up on a pinboard, she had pictures of the faces of a number of famous people from various angles. It was very much an 'of course!' moment. To make sure an image is credible if you can't form one yourself, describe something in front of you that actually exists. This is the equivalent of painting from a photograph instead of painting a scene entirely in your head. It seems overwhelmingly likely that any painter who can create a detailed imaginary scene is an eidetic imager or close to it.

But the bit that goes less noticed is that imagining pictures isn't important just for wholly made-up scenery, but for memories too. The source material is there, but you still need to recreate the scene.

And I find I’m fairly bad at forming mental images even of people I know well. I can remember particular scenes they were in, and certain facial expressions that seem familiar. But I don’t immediately have a crystal clear picture of them in my head. I’ll remember a particular still image, or a collage of them. But I can’t make the picture do arbitrary things like talk, or perform some action. I can’t imagine a different version of them, I can only remember a particular image of them that stuck for some reason.

Part of the reason that this deficit goes almost completely unnoticed is that it doesn't show up in the one situation where you might expect it, namely being bad at recognising people. I'm actually okay at that, even if I can't always remember their name. When presented with an actual person in front of me, it's enough to stir up recollections of what they were like, and to fill in the blanks of their appearance. Since I had only a hazy memory of what they looked like anyway, it's less jarring to see how they've changed, and so I'm less likely to mistake them for someone else.

So I can remember the faces in front of me, but not the faces that aren't. They’re stored in there, because I know them when I see them. But I can’t recall them at will.

You'd think I would anticipate this by taking a lot of photos to preserve the memories. Sometimes I do, but often I'm content to remember the occasion in terms of events and stories, even if the scene isn't always precise. This is a reasonable tradition in the Holmes household. My parents took long trips around Europe and Asia in their youth, but I think I've seen precisely one photo from the entire time, affectionately referred to as 'the Cat Stevens photo'. But the stories from that time have been recounted many times, particularly among the people who were there. As Papa Holmes put it, when describing his relative lack of photos of his trips – 'you go places, and you take in the scenery at the time. And you remember it, for a while. And then … you forget'.

In other words, the forgetting is okay, and is actually an important part of the process, the way death is to life. The world you remember was always impermanent anyway. Eventually, even the memory is too.

Tuesday, August 5, 2014

Thought of the Day

Curses on you, all you great problems! Let someone else beat his head against you, someone more stupid. Oh, just to rest there from the interrogator’s mother oaths and the monotonous unwinding of your whole life, from the crash of the prison locks, from the suffocating stuffiness of the cell. Only one life is allotted us, one small, short life! And we have been criminal enough to push ours in front of somebody’s machine guns, or drag it with us, still unsullied, into the dirty rubbish heap of politics. There, in the Altai, it appeared, one could live in the lowest, darkest hut on the edge of the village, next to the forest. And one could go into the woods, not for brushwood and not for mushrooms, but just to go, for no reason, and hug two tree trunks: Dear ones, you’re all I need.
-Aleksandr Solzhenitsyn, The Gulag Archipelago.

Saturday, August 2, 2014

Living on the Grid

There is something deeply appealing about a city built on a road grid. Not just because of my love of order and planning, either. You can arrive there and navigate your way around pretty easily, because most places can be accessed without making more than a couple of turns. I always like that in a place I’m travelling to. Not the fastest route, but the route I'm least likely to screw up.

It also gives rise to the wonderful phenomenon of numbering addresses by block. Growing up in Australia, I had the assumption that consecutive houses on the same side of the street would be two numbers apart so baked into my way of thinking that if I'd lived a thousand years, it would never have occurred to me to do it differently. I think this is how it always is. Everybody thinks of technological change as making an iPad or something; they rarely look for improvements in something mundane and simple like how addresses are numbered. But in a grid city, you can do much better than consecutive numbering, by making the numbers go up by 100 each block and just rank-ordering arbitrary numbers within the blocks. Manhattan is the epitome of this. When every street and avenue has a number, '312 E 28th St' tells you exactly where the building is – between 3 and 4 blocks east of the dividing line of 5th Avenue, on 28th Street. If the city had a definite lower-left point, you wouldn't even need the extra knowledge of the dividing avenue. In Chicago, the numbering tells you one dimension, but the streets themselves have names. So you have to, for instance, know that the downtown streets are named in order of the presidents. Well, except for Jefferson, who's somewhere else. And there's the pesky fact that there were two 'Adams' presidents within the space of five (it's Quincy Adams who gets the street, not Adams). Hey, nowhere's perfect.
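The arithmetic of the scheme is simple enough to sketch in a few lines of code. This is a hypothetical, idealized version - real Manhattan numbering is messier, with Broadway and unevenly spaced avenues:

```python
# Block-based numbering: the hundreds part of the house number gives
# the block, and the remainder just rank-orders buildings within it.
def locate(house_number, street):
    """Decode an idealized East-side address into
    (blocks east of the dividing avenue, rank within block, street)."""
    block = house_number // 100      # 312 -> 3: past the third block line
    position = house_number % 100    # 12: ordering within the block
    return block, position, street

# '312 E 28th St': a bit over 3 blocks east of 5th Avenue, on 28th Street.
print(locate(312, 28))
```

Contrast this with consecutive numbering, where the house number tells you nothing about distance unless you also know where the street's numbering starts.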

The knock on grid cities is that they’re boring from a design point of view, but I’m not so sure. From high above the city at night, the lights on Roosevelt Ave stretch out in a line to the far horizon, fading off as if they go on forever. Without contours in the ground, it feels like living in the mathematician's depiction of parallel lines on an idealized infinite plane. Eventually, the lights on the two sides of the street must converge to a single point. Theoretically, this happens only at infinity, but I’d wager that somewhere in Nebraska ought to be far enough.

Wednesday, July 30, 2014

The best poem title ever

Making this a double dose of Hal G.P. Colebatch, if there has been a better poem title than:

'Observing a thong-shod pedestrian's reaction to catching his toe in the ring of a discarded condom'

I certainly haven't come across it.

Time for Malcolm Fraser to repent

"And there is another feeling that is a great consolation in poverty. I believe everyone who has been hard up has experienced it. It is a feeling of relief, almost of pleasure, at knowing yourself at last genuinely down and out. You have talked so often of going to the dogs--and well, here are the dogs, and you have reached them, and you can stand it. It takes off a lot of anxiety."
-George Orwell, Down and Out in Paris and London 
From a certain progressive standpoint, Zimbabwe, it seems, has at last gone to the dogs.

The Rhodesians would have told you that the dogs arrived years ago, and the rest of the changes were merely being more open about the kennel-like aspects of state.

Of course, this doesn't mean that things can't get worse. When it comes to forecasting the fortunes of countries, as in stockmarkets, picking exactly when the bottom has been reached is a very perilous business. It is always dangerous with basket-case countries to assume that things can't get any worse, because truly awful leaders seem to be uncannily persistent in finding a way. If Zimbabwe is remembered for anything, perhaps it will be for that.

So let's focus on a more stripped-down prediction - that installing Robert Mugabe was a mistake that everyone involved ought to feel intensely ashamed about.

Surely that's been pretty obvious for at least 25 years, right?

Ha! Sometimes it takes a while for things to get so bad that they break through the cognitive dissonance of those who helped create the disaster.

Just ask former Prime Minister of Australia Malcolm Fraser.

In one of the more disgraceful episodes of a mostly worthless (at best) Prime Ministership, Fraser was heavily involved in getting Robert Mugabe installed. As Hal G.P. Colebatch recounts:
Fraser's 1987 biographer Philip Ayres wrote: "The centrality of Fraser's part in the process leading to Zimbabwe's independence is indisputable. All the major African figures involved affirm it."
Tanzanian president Julius Nyerere said he considered Fraser's role "crucial in many parts", and Zambian president Kenneth Kaunda (whose own achievements included making his country a one-party state) called it "vital".
Mugabe is quoted by Ayres: "I got enchanted by (Fraser), we became friends, personal friends ... He's really motivated by a liberal philosophy."
Fraser's role also attracted tributes from Australian diplomats. Duncan Campbell, a former deputy secretary of the Department of Foreign Affairs and Trade, has claimed that Fraser was a "principal architect" of the agreement that installed Mugabe and that "he was largely responsible for pressing Margaret Thatcher to accept it".
Former Australian diplomat and Commonwealth specialist Tony Kevin has also claimed that Fraser "challenged Margaret Thatcher's efforts to stage-manage a moderate political solution".
In an interview in 2000, Fraser showed that he had learned absolutely nothing from the process. This was just after Mugabe had passed a law allowing white farming assets to be taken without compensation.
JOHN HIGHFIELD: Mr Fraser, what do you make of these goings on in Zimbabwe? After all it was in the late 1970s that you and your friend, Kenneth Kowunda [phonetic], persuaded Mrs Thatcher to come across to your view and give Zimbabwe independence.
MALCOLM FRASER: I find it very hard to understand the disintegration that has, in fact, occurred because I really did believe, and I think many people who knew what was happening in the country believed, that President Mugabe started very well. I can remember speaking with Dennis Norman who was a white farmer in Mugabe's first government, and he spoke very highly of him and spoke very highly of his policies at that time.
...
I'm - you know, what has gone wrong in the last several years I find it very difficult to pin-point, except that economic policies have not worked. He's tried to defy, I think, the international moves of the marketplace which would have reduced investment in Zimbabwe and therefore reduced employment opportunities for Zimbabweans.
By 2000, it had been clear for quite a while that Zimbabwe had been disgracefully managed, even judged on a purely economic basis. When Mugabe was installed, Zimbabwe had a GDP per capita of $916 (in current US dollars). By 2000, it had declined by over 40%, to $535. Have a look at the graph below of the subsequent growth of some nearby countries that were poorer than Zimbabwe in 1980 and see what you think of Fraser's claim that Mugabe 'started very well'. Try putting in Botswana as well (slightly richer in 1980) and the comparison becomes even more dismal, as it towers over Zimbabwe. The most optimistic description is that things hadn't yet gone to hell as late as 1982. Heckuva job, Malcolm and Robbie!


But in some sense, this isn't really the striking point about the Fraser response.

The first bizarre part is Fraser's contemptible obfuscation of referring to a policy of forced, uncompensated confiscation of white farm assets as merely 'economic policy'. Nothing racial here, no siree! See no race, hear no race. Why is that? Why the absurd euphemisms?

The second bizarre part is that, 20 years later, Fraser still finds the events mysterious. Do you think this might be related to the first point, you worthless old fool?

Fraser has to skate around the racism of the Mugabe regime, because given the economic catastrophe that befell the country, this is the only advantage that the initial Mugabe boosters can claim over Smith. Sure, we replaced a system that was lifting Zimbabwe out of poverty with a brutal and corrupt regime that terrorises its citizens. But hey, at least it's not racist, like Smith!

Of course, Smith's racism was mostly of a disparate impact variety. Rhodesia was not South Africa, and the practical restrictions on blacks were far less than under Apartheid. The 1961 constitution had property and education requirements for voting rights, but made no explicit racial prohibitions (although later voting systems did). The outcome was heavily skewed towards whites, obviously, and this was almost certainly the intended effect. But if you think that having a property requirement for voting means that a system is not meaningfully democratic, then Britain in World War I was just another undemocratic oligarchy fighting against other equally undemocratic oligarchies. You also wouldn't want to praise the US founding fathers too highly.

When Hal Colebatch caned Fraser in 2008 for his shameful role in getting Mugabe installed, Fraser's response was pathetic. You will scour in vain for any description by Fraser of racism in anything Mugabe did. You will also scour in vain for any coherent explanation of what exactly was wrong with the Smith regime, except that Smith personally was a real meanie who didn't let Mugabe, who was already fighting a civil war to overthrow the government, visit his young son when he was sick, and when he ultimately died. By all means, let's then give the country to a man who at the time was already famous for running an organisation that cut the noses and lips off blacks who opposed him. Have a look, Malcolm! Have a look, if you can stomach it, and tell me again what a terrible man Ian Smith was.

In the mean time, Fraser clings to a cock-and-bull story that the real issue with Mugabe was when his wife died, and that's when it all went to hell. Great theory! Completely untestable in terms of its main aspects of course. But what about the implication - that nobody could have seen this coming, as the start was so excellent. Seems plausible, no? Except that Smith pretty accurately did predict what was going to happen. Malcolm Fraser continues to express his surprise. Smith expressed no surprise at all. Sadness, yes, but not surprise.

How about, just for a change, you consider the possibility that you got completely suckered by Mugabe, that his moderate image was all a con for your benefit, and that millions of people suffered enormously because of your gullibility. You got played, you silly old fool. You are the muppet in this story, the mark, the rube. 35 years later you still can't see that. Gee, I picked the cup that I'm super sure had the pea under it! And somehow I still lost money, it just doesn't make sense!

So now, let us return to the story I linked at the start. Exactly where have things gotten to recently?
In the harshest official policy on race and land reform in a country that has been close to bankruptcy, the 90-year old autocrat said Wednesday that whites may no longer own any land in Zimbabwe. 
Let us pause and reflect on Malcolm Fraser's shame. We have known for almost 30 years that Fraser bequeathed to Zimbabwe economic and social catastrophe. We have long known of the thousands brutally killed and tortured in Mugabe's prison of a country. We have long known of the increasing hostility towards the dwindling number of remaining whites, even when it was entirely self-defeating from an economic point of view. We have known that Mugabe has long since stopped holding any semblance of free and fair democratic elections, another frequent criticism of Smith.

But finally, we have reached the nadir, from the progressive point of view - at long last, we now have a regime that is actually more racist than Ian Smith's. Smith never imposed any restrictions this draconian on blacks. The fig leaf, absurd though it was all along, is finally stripped away. There is nothing left, absolutely nothing, to recommend this regime over the one it replaced.

Malcolm Fraser never had to face the consequences of his actions. He will live out his days in comfort and peace in a stable and prosperous first world country. The same cannot be said of the citizens of Zimbabwe, both white and black, who had to live with the regime Fraser helped install.

Saturday, July 19, 2014

Snappy responses you weren't hoping for that nonetheless answer the question quite well

In the last few years, unable to hold a list of just four grocery items in my head, I’d begun to fret a bit over my literal state of mind. So to reassure myself that nothing was amiss, just before tackling French I took a cognitive assessment called CNS Vital Signs, recommended by a psychologist friend. The results were anything but reassuring: I scored below average for my age group in nearly all of the categories, notably landing in the bottom 10th percentile on the composite memory test and in the lowest 5 percent on the visual memory test.
All this means that we adults have to work our brains hard to learn a second language. But that may be all the more reason to try, for my failed French quest yielded an unexpected benefit. After a year of struggling with the language, I retook the cognitive assessment, and the results shocked me. My scores had skyrocketed, placing me above average in seven of 10 categories, and average in the other three. My verbal memory score leapt from the bottom half to the 88th — the 88th! — percentile and my visual memory test shot from the bottom 5th percentile to the 50th. Studying a language had been like drinking from a mental fountain of youth.
What might explain such an improvement?
Regression toward the mean.
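For the skeptical, the effect is easy to simulate (all numbers made up): model a test score as stable ability plus independent test-day noise, select the people in the bottom 5 percent on the first sitting, and watch their retest average drift back toward the mean, with no study of French required.

```python
import random

random.seed(0)
N = 100_000

# Score = stable ability + independent test-day noise, both standard normal.
abilities = [random.gauss(0, 1) for _ in range(N)]
first = [a + random.gauss(0, 1) for a in abilities]
second = [a + random.gauss(0, 1) for a in abilities]

# Select the people whose first score was in the bottom 5%.
cutoff = sorted(first)[int(0.05 * N)]
selected = [i for i in range(N) if first[i] <= cutoff]

mean_first = sum(first[i] for i in selected) / len(selected)
mean_second = sum(second[i] for i in selected) / len(selected)

# The selected group's retest average sits roughly halfway back toward zero:
# their first score was partly bad luck, and the luck doesn't repeat.
print(mean_first, mean_second)
```

The worse the noise relative to the stable component, the bigger the bounce-back - which is exactly what you'd expect from a one-off cognitive assessment taken on a day you happened to be fretting about your memory.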

Monday, July 14, 2014

Lionel Messi and Soccer Equilibrium Outcomes

So another World Cup has come and gone. Enough water had passed under the bridge that I no longer resented Argentina for their dismal performance in 2002 when I wagered on them. I was vaguely hoping for an Argentine win, just because I would have liked to see Lionel Messi win a cup.

'Twas not to be, of course.

A very good starting point for understanding Messi is this excellent post by Benjamin Morris at FiveThirtyEight, going through a whole lot of metrics of soccer success and showing that Messi is not only an outlier, he's such an outlier that his data point is visibly distinct from the rest even in simple plots. Like this one:

[chart: Messi's data point visibly separated from every other player's on scoring metrics] (image credit)

Seriously, go read the whole thing. If you're apt to be swayed by hard data, it's a pretty darn convincing case.

So what happened in the World Cup? Why didn't he seem nearly this dominant when you watched him play?

The popular narrative is that there's some inability to perform under pressure - in the big situations when it really counts, he doesn't come through with the goods. He's a choker, in other words.

This is hard to disprove exactly, but one thing that should give you pause is that with Messi on the team, Barcelona has won two FIFA Club World Cups and three UEFA Champions League titles. This at least suggests that the choking hypothesis is more specific to World Cups.

So one explanation consistent with the choking hypothesis is that the World Cup is much higher stakes than the rest, hence the choking is only visible in that setting. It's possible, and hard to rule out.

But another possibility is that the difference comes from the way that opposing teams play against Messi in each setting.

Remember, a player's performance is an equilibrium outcome. It's determined by how skilfully the person plays that day (which everyone thinks about), but also by how many opposing resources are focused on the person (which very few people think about).

Let's take the limiting case, since it's easiest. Suppose I take a team made up of Lionel Messi and ten guys from a really good high school team, and pit them against a mid-range club team. My guess is that Messi wouldn't perform that well there, and not just because he wouldn't have as many other good people to pass to. Rather, the opposing team is going to devote about four defenders just to covering Messi, since it's obvious that this is where the threat is. Throw enough semi-competent defenders at someone, and you can make their performance seem much less impressive.

Have a look at the pictures from the Daily Mail coverage of the game against the Netherlands. In one, Messi is surrounded by four Dutch defenders. In another, he's surrounded by three. The guy is good, but that's a pretty darn big ask of anyone.

In other words, Messi may be better than the rest of the Argentine players by a large enough margin that opposing teams will throw lots of resources into covering him, making it harder for him to shine. In soccer, as in martial arts reality (as opposed to martial arts movies), numbers matter. Jet Li may beat up 12 bad guys at a time, but if you try that in real life, you're on your way to the emergency room or the morgue, almost regardless of your martial arts skill.

The last piece of the puzzle for this hypothesis is the question of why this doesn't happen when Messi plays at Barcelona.

I'm a real newb at soccer (evidenced by me referring to it as 'soccer' - you can take the boy out of Australia, etc.), but my soccer-following friends can tell me if I'm right here or not.

My guess is that the rest of the Barcelona team is much closer to Messi's level of skill than the rest of the Argentine team. This means that if opposing teams try to triple mark Messi in a Barcelona game, the rest of the attackers will be sufficiently unguarded that they'll manage to score and the result will be the same or even worse than if Messi were totally covered. As a result, Messi goes less covered and scores more.
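The logic can be made concrete with a toy model (all skill numbers hypothetical): suppose each attacker's expected output is his skill divided by one plus the number of defenders covering him, and the defense greedily assigns a fixed pool of defenders to whoever is currently the biggest threat.

```python
def allocate_defenders(skills, n_defenders):
    """Greedy defense: repeatedly assign a defender to the attacker
    with the highest current threat, skill / (1 + defenders on him).
    Returns each attacker's resulting expected output."""
    defenders = [0] * len(skills)
    for _ in range(n_defenders):
        threat = [s / (1 + d) for s, d in zip(skills, defenders)]
        defenders[threat.index(max(threat))] += 1
    return [s / (1 + d) for s, d in zip(skills, defenders)]

STAR = 10

# Star with weak teammates: the defense piles onto him.
weak = allocate_defenders([STAR, 2, 2, 2], 6)

# Star with teammates near his own level: triple-marking him
# leaves too many other threats unguarded, so coverage spreads out.
strong = allocate_defenders([STAR, 8, 8, 8], 6)

# The same player looks roughly twice as productive with the better team.
print(weak[0], strong[0])
```

The defense's best response changes with the teammates: surrounded by weak players, the star soaks up most of the defenders; surrounded by strong ones, swarming him is too expensive, and his individual output rises even though his skill hasn't changed.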

There's a reason that the sabermetricians (who tend to be among the most sophisticated of sports analysers) talk about wins above replacement. You need to think about the counterfactual of what would happen if the person weren't there, not the direct effect of what they did or didn't do in equilibrium.

Of course, the skeptics will point out the cases where great stars did manage to individually play a big role in lifting their national teams to great success. What about Maradona, they say?

This is a fair question. Sometimes you really can get it past five defenders to win a world cup. Maybe that's what a true champion would have done yesterday.

Or maybe the English just weren't marking as well as the Dutch were.

Or maybe, even more pertinently, the rest of the Argentine team in '86 was sufficiently better in relative terms that England couldn't afford to mark Maradona as hard. The effect of this, if true, would be for Maradona's performance to look more spectacular relative to the rest of his team - having a good team means fewer defenders on you, which means more heroics. And when that happens, you look individually more brilliant, leading to you getting all the credit and making it look like you won the game single-handedly. If you really were that much better than everybody else, you would be less likely to deliver a performance that showed this fact to a novice observer.

Not many people think in equilibrium terms. This is why we analyse data.

The data case, however, is clear. Viva Messi!