Andrew Coyne and the lump of labour.

A while back I went to a movie on a weekday afternoon. As usual I bought my ticket at the automatic kiosk. As I crossed the lobby I noticed that there was nobody else around, neither patron nor employee. Even the snack counter was unattended. Ordinarily there’s an usher waiting to tear your ticket at the far end of the lobby, but this checkpoint was also unmanned, so I shrugged and made my way down the hall to my theatre, where I took my place among the half-dozen other lonely mid-afternoon moviegoers. After a few minutes the lights dimmed, the doors automatically swung closed, and the ads started. “Holy crap,” I realized. “We’re in a giant robot.”

Presumably there were two or three people working at the theatre that afternoon, and I happened not to run into any of them. But it’s easy to imagine how the handful of remaining jobs for humans could be turned over to machines. The ticket-tearer could be replaced with a barcode scanner and a swinging door. The snack counter could be replaced with a bank of vending machines. The only person who’s probably irreplaceable, in the near term, is the poor schlub who cleans the bathrooms. But we’ll figure out how to put him out of a job eventually.

Andrew Coyne, writing in the National Post, isn’t worried for that out-of-work toilet scrubber:

Artificial intelligence, we are told, if it does not altogether enslave us, will at the very least make us economically obsolete. Already it has started to replicate tasks previously thought the preserve of the human mind, from legal drafting to investment advice. Call it the Robots Will Take Our Jobs theory.

Where the RWTOJ thesis falls down is not in the idea that there will be jobs lost, but in its unstated corollary, that there will be no jobs created in their place.

Coyne calls this the “lump of labour” fallacy, as in, there’s a lump of jobs that needs a-doin’, and if part of that lump is handed over to machines, the workers presently doin’ those jobs will be left with nothing to do. Here’s why it’s a fallacy:

[I]n fact there is no fixed amount of work to be done. There is no permanent list somewhere of all the goods and services consumers might want or the jobs that might be filled providing them. Consumers’ wants are limitless, as is human ingenuity: not only do we generally prefer more of what we already want, but entrepreneurs are constantly thinking up new wants we didn’t know we had. Much of today’s workforce is engaged in making goods and services that not only did not exist a century ago, but had not been imagined.

Likewise, Coyne tells us, future displaced workers will find new jobs providing goods and services we can’t even imagine yet. Maybe not the exact same workers – he acknowledges that the individuals thrown out of their jobs might be irreparably hurt. But overall, the number of new jobs created will more than compensate for those lost. It’s happened before and we can count on it happening again.

He might be right. By definition we can’t imagine the things we can’t yet imagine. But I don’t find Coyne’s reassurances all that convincing, probably because I don’t think the “lump of labour” fallacy accurately describes the pessimists’ main concern about the prospective obsolescence of human labour. Because, you see, it’s not labour that we think of as an inflexible lump. No, we’re much, much more pessimistic than that! Our fear is that we humans are the inflexible lump.

It probably helps to put this argument in the form of a diagram. Here we have a visual representation of every thing a human can do:

[Figure: human capabilities]

The diagram, obviously, is incomplete. The list of unique things contained within this circle is infinite – we could, for instance, say a human is capable of writing a novel, or we could get more specific and say a human is capable of writing Lucky Jim, or we could get more specific still and say a human is capable of pecking out Lucky Jim one-handed on a typewriter while simultaneously sipping whiskey and smoking a cigarette.

Some of those refinements add value – if you can write Harry Potter and the Goblet of Fire instead of Lucky Jim, you can earn a half-billion dollars instead of a few paltry million. Most of the refinements, like typing one-handed, or in your underwear, or in a pressurized dome at the bottom of the sea, add no value at all. While one can imagine an endless number of refinements to any conceivable human activity, nearly all of them are economically worthless.

Furthermore, while the number of unique things in the circle is infinite, the circle is not all-encompassing. Undoubtedly in the future people will be employed doing things we don’t currently have words to describe – amphibiating the sensoid bits on tri-dimensional metadroms, maybe. But whatever as-yet-inconceivable things we may someday do, they all lie within the set of things humans are capable of doing. That set is limited not by the human imagination, but by human biology. Our lifting capacity is constrained by having only two arms which can exert only a certain amount of force. Our ability to do mental calculations is limited by the number of figures we can retain in our short-term memories. As for amphibiating those sensoid bits, whatever the job entails we can be pretty sure of certain things it won’t entail, like breathing sulfur dioxide, or seeing frequencies outside the visible spectrum.

I foresee your objection. With the assistance of machines, you say, we can augment our lifting, calculating, breathing, and seeing capacities. That’s why we invented machines in the first place, starting with the simplest tools – like the sharp-edged stones our proto-human ancestors used to tear the flesh of animals. I suppose an apeman who subscribed to the “lump of labour” fallacy would have complained that sharp-edged stones were putting able-bodied apemen out of work – where once three or four workers would have used their nails and teeth to dismember a hartebeest, now the job could be done by one hairy fellow with a stone. Brighten up, Andrew Coyne would have told those unemployed apemen, now you can spend more time hunting and gathering. And he would have been right!

Let’s put, right next to a circle representing proto-human capabilities, a second circle representing all the things technology could do, circa 1 million BC. We’ll have to zoom in to see it:

[Figure: the tiny circle of early technology overlapping the circle of proto-human capabilities]

I’ve made the little circle overlap the big circle, because right from the start, technology is impinging on the previously exclusive domain of proto-humans – the sharp-edged stones are displacing manual labour. But that’s all right, because there’s literally an infinite number of other things the displacees can do with the time freed up by the invention of the sharp-edged stone. Learn to make fire, for instance.

You can probably see where I’m going with this, so let’s fast forward:

[Figure: machines versus humans]

You’ll notice that the circle of human capabilities hasn’t expanded since the arrival of Homo sapiens. Better nutrition and education may have made us a little stronger or smarter, but the child of one of our Neolithic forebears, transplanted to modern times and raised as one of us, would be well within current mental and physical norms – perfectly capable of manning the snack counter at the local multiplex.

It’s not greater innate intelligence that has enabled us to make machines of a sophistication that our ancestors couldn’t have dreamed of. It’s the fact that new technologies are built on top of existing technologies. You can’t invent a self-driving car unless you’ve already invented a car, which depends on the earlier invention of the internal combustion engine, and so on back to the discovery of fire.

Human capabilities, unfortunately, don’t accumulate in the same way.

So, with every passing year, the expanding circle of technological capabilities takes a bigger bite out of the static circle of human capabilities. Every year the number of jobs for which humans and only humans are qualified diminishes. There are still, to be sure, an infinite number of options within the narrowing crescent of skills exclusive to humans, but among those infinite options it becomes harder and harder to conceive of an as-yet-untapped skill that can be turned into an economically viable career. And the crescent keeps shrinking.

And since it takes a fairly high degree of imagination and inventiveness to think up an entirely new career that won’t immediately be at risk of being taken over by machines, the many, many humans who aren’t particularly imaginative or inventive – I count myself among that number – are left to compete with robots in the penumbra of activities where our capabilities overlap. How do humans compete? Since we can’t work more quickly, or more reliably, or put in longer hours than machines, we have to work more cheaply.

This is a great outcome for employers. You don’t have to actually bring in machines to replace your workforce of fragile, clumsy, illness-prone, emotionally unpredictable humans. The mere hint that they’re replaceable should be enough to subdue their uppitiness. And every year the machinery gets faster, cheaper, more reliable…

I imagine to a guy like Andrew Coyne, who probably hangs out with other high-IQ types capable of dreaming up new goods and services and turning them into profitable businesses, the limitations of the human lump are less apparent than they are to us middling-IQ proles out here in lumpenland. “Don’t worry,” he tells us, “I read in Fast Company that the tri-dimensional metadrom industry is going to need tens of thousands of trained amphibiators.”

“But what kind of education will that require?”

“Just a two-year diploma.”

“But I’m a thirty-nine-year-old long-haul trucker.”

“No problem, the government will help pay for your retraining.”

“But how long before they develop robot amphibiators?”

“A decade and a half, at least.”

“And what do I do then?”

“By then tetra-dimensional metadrom technology should be in full swing.”

“And they’ll still need experienced amphibiators?”

“Well, you might have to go back to school again…”

“You realize I became a long-haul trucker in the first place because I didn’t do well in school, right?”

“Or maybe you lost interest in school because you thought you could have a decent-paying career without a post-secondary diploma.”

“Yes, yes, that’s exactly the point I’ve been trying to make!”

“I literally can’t understand what you’re complaining about.”

“I can see that. Well, thanks for your pep talk anyway, Andrew Coyne.”


Equality and homogeneity.

I picked up a used copy of G.K. Chesterton’s 1906 biography-cum-critical-appreciation Charles Dickens on a visit to the UK five years ago, but to avoid spoilers I held off tackling it until I’d read all of Dickens’s novels at least once. I finally polished off The Mystery of Edwin Drood last month, freeing me to read the Chesterton book.

One of its major themes is Dickens’s egalitarianism, his “democratic optimism”:

We shall consider Dickens in many other capacities, but let us put this one first. He was the voice in England of this humane intoxication and expansion, this encouraging of anybody to be anything.

Which strikes very comfortingly on the modern ear – but it quickly becomes apparent that Chesterton’s notion of equality is very different from the version now championed. At one point he offers this telling digression:

In one sense things can only be equal if they are entirely different. Thus, for instance, people talk with a quite astonishing gravity about the inequality or equality of the sexes; as if there could possibly be any inequality between a lock and a key. Wherever there is no element of variety, wherever all the items literally have an identical aim, there is at once and of necessity inequality. A woman is only inferior to man in the matter of being not so manly; she is inferior in nothing else. Man is inferior to woman in so far as he is not a woman; there is no other reason. … If everything is trying to be green, some things will be greener than others; but there is an immortal and indestructible equality between green and red.

That is to say, when everyone’s worth is measured by a single criterion – by IQ, by wealth, by physical strength – then equality is an impossible goal. It’s only when people are liberated to pursue their manifold, unpredictable, and often hilarious excellences that true equality – the equality of the human spirit – becomes visible.


In 2013 my father died. As his only child and heir I received a sizeable life insurance payout, the sum of the small amounts scattered among his various bank accounts, and a modest monthly pension which will carry on through 2020. This hardly added up to what a middle-class Canadian would describe as a fortune, but it was sufficient to free me, a single person with inexpensive tastes, from the necessity of paid employment for a while.

I used my freedom to write a novel.

…Or that’s the self-glamorizing way to put it. It would be as accurate to say I pissed away my dad’s life savings for three years, during which time I incidentally happened to produce a novel – which, even if I somehow get it published, is highly unlikely to earn back even a tiny fraction of what I pissed away, let alone the money I failed to earn by not working.

I didn’t get here through stupidity. I knew full well that a multi-year gap in the middle of my prime wage-earning years would blow my chances of ever owning a home, or raising a family, or being treated by anyone as a person of importance. It’s not that I don’t value any of those things, but you can see by my choices that I don’t value them that highly.

Meanwhile, given my level of laziness, I knew I was unlikely ever to write a novel while simultaneously working a full-time job. And writing this novel was important to me.

So I’m okay with my decision – for now. Check back with me when I’m a pensionless sixty-five-year-old starving in a ditch.


Assuming I’ve correctly estimated my expenditures, it appears that last year I scraped by at roughly the Low-Income Cut-Off, or LICO – the closest thing Canada has to an “official” poverty line – for a single person.

I don’t think of myself as living in near-poverty. My apartment is mostly bug-free. My budget allows me two bottles of liquor a month, sufficient for my current level of incipient alcoholism. A couple of times a year I fly out to see relatives in Toronto, where I make a show of spending liberally so they don’t worry about me.

Of course it would be a very different thing if I’d been grinding out forty hours a week at Tim Hortons to bring home an equivalent income. A LICO-level standard of living is quite comfortable when combined with the freedom to sleep in as late as you like.

Comfortable for me, I mean. Your results may vary.


I have this thought experiment that strikes me as so obvious it’s probably not even worth writing down. And yet I haven’t seen it expressed this way anywhere, so maybe it’s not that obvious, who knows.

Suppose all the wealth in a country is redistributed equally among all its citizens. All debts are cancelled, all money and goods are apportioned equally, all the land is divided in such a way that everyone’s share is equally productive.

It’s a wealthy country. There’s more than enough for everyone to live comfortably. No-one has to work at Tim Hortons any more – though they’re welcome to, if they like.

If you leave this egalitarian paradise alone for a while, then check in at the end of, say, ten years to see how things are progressing, will everyone still be equally wealthy?

Perhaps you’ll find that a few wily and unscrupulous operators have fleeced their more trusting fellow citizens of all or most of their wealth. But that wasn’t really a fair test. Those who had been well-educated, well-connected, and well-off prior to the redistribution had an advantage over the previously disadvantaged and downtrodden.

So let’s run the experiment again, only this time we’ll kidnap the young children of our failed socialist state and resettle them in a brand new, unspoiled country, where they’ll all be dressed identically, housed identically, fed identically, and educated to a common standard. When the kids reach eighteen the wealth of their new land will be shared out again, and this time, none of them will have any advantage over the others. Surely when we check back in at the end of ten years…

Huh, there’s still widespread inequality. It turns out the kids have different tastes, different interests. Some enjoy the simple life while others like to decorate their homes with fancy and expensive things. Some are content to hew wood and draw water while others prefer to sleep in late and write unsellable novels. Others enjoy manufacturing things that are useful and necessary, which they can exchange with their neighbours for a small share of their neighbours’ wealth. Still others have discovered that having extra wealth is in itself rather enjoyable, and they’re okay with spending their spare time doing not-very-enjoyable things – even working at Tim Hortons – for the chance of making a little more.

What can you do? Kidnap another generation of imperfectly equal babies, I guess. You’ll just have to brainwash the little suckers.
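If you want to see the mechanism laid bare, here’s a toy simulation of the experiment. It’s a sketch, not social science: every number in it is invented, and a single “taste for earning” parameter stands in for all the differences in preference at once.

```python
import random
import statistics

# Toy version of the thought experiment above. Everyone starts with
# identical wealth; the only thing that varies between citizens is
# their taste for earning. All numbers are invented for illustration.
def wealth_spread_after(years: int, taste_spread: float,
                        n_citizens: int = 1000, seed: int = 1) -> float:
    rng = random.Random(seed)
    tastes = [max(0.0, rng.gauss(1.0, taste_spread)) for _ in range(n_citizens)]
    wealth = [100.0] * n_citizens        # perfectly equal start
    for _ in range(years):
        for i, taste in enumerate(tastes):
            wealth[i] += 10.0 * taste    # earnings track taste, nothing else
    return statistics.pstdev(wealth)     # 0.0 means the equality endured

for spread in (0.0, 0.2, 0.5):
    print(f"taste spread {spread:.1f}: wealth std dev after 10 years = "
          f"{wealth_spread_after(10, spread):.1f}")
```

With a taste spread of zero, the equal division endures indefinitely; give the citizens any variety at all, and the gap widens every year.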


One of the implications of my thought experiment is that the more identical the citizens are in their tastes, interests, and priorities, the more enduring the equal distribution of wealth is likely to be. Which raises the question – did those countries that are celebrated for their egalitarianism get that way because they pursued egalitarian policies? Or are they naturally egalitarian because their citizens exhibit a high degree of homogeneity?

And what happens when homogeneous cultures attempt to assimilate large minorities with very different sets of tastes, interests, and priorities?

…But now we’re getting into the touchy subject of group differences, where you can easily get yourself blacklisted for saying the wrong thing. Better to stay away from specifics.

I’ll only suggest – delicately and humbly – that if you and I can have different preferences about how to spend our time and money, leading to differences in life outcomes, isn’t it probable that different groups, with different histories, different backgrounds, will tend to have different preferences that lead to different outcomes?

The modern version of egalitarianism proclaims that women and men, gays and straights, Jews and gentiles, all must be distributed in every profession, in every sphere of activity, at every level of prosperity, in proportion to their overall numbers. Only then will we all be equal.

But the price of that equality may be that women and men, gays and straights, Jews and gentiles – you and I – lose our distinctive identities.

My own old-fashioned view is that we’re equal already, in the Chestertonian sense – “the immortal and indestructible equality between green and red”. But to the modern progressive mind, that sounds like complacency. Greenness may be just one of many possible yardsticks for comparing people and groups, but it’s the one the modern world is built around. To tell the ungreen to be satisfied in their redness, or yellowness, or blueness, while we continue to adulate green above all, is bound to lead to resentment.

I’m not sure there’s a solution to this problem, or anyway one that doesn’t involve illiberal attempts to re-engineer human nature – precisely what I’m opposed to. So long as people have the freedom to pursue different paths we’ll tend to group ourselves around common values and interests. And so long as different groups exist, jealousy, suspicion, and hostility will arise between them. The best we can do is try and keep these feelings from breaking out into violence and persecution.

In any case, my complacent prediction is that human variety, and human conflict, will outlive all the clumsy attempts by the modern egalitarians to stamp them out.


True and original…or, why we write (or don’t).

For the last few years most of my intellectual fuel has been burned up in writing my first novel. But since I finished the novel this spring (apart from a few minor tweaks and rearrangements, still ongoing) I’ve had a hard time motivating myself to resume my old habit of blogging. Laziness is clearly a factor, but I don’t think I’m any lazier than I used to be – only more realistic about what I expect my writing to achieve.

Suppose we evaluate every piece of writing, existing or potential, on the following dimensions:

1) Truthfulness.
2) Originality.
3) Effort.

By “truthfulness” I mean, okay, yes, empirical truth: Did this happen in the real world? Can it be relied on, to a reasonable degree of statistical certainty, to happen again? But fiction can also be truthful, if its artifice reveals deep-down truths about humanity, social forces, whatever. When a character in a story does something no human would actually do, “It’s fiction!” isn’t a defense – unless the character is a robot or an alien, in which case “It’s science-fiction!” is a defense, provided that the robot’s or alien’s behaviour points to something true about our own reality. I guess what I’m saying is that truthfulness is a slippery and debatable concept – but that doesn’t free us to knowingly write lies.

“Originality” seems pretty straightforward. The older you get, the more you read, the more you start seeing the same old ideas, the same old arguments, coming around again and again. This doesn’t mean the old ideas or arguments are untrue. Sometimes people need to be reminded of things they already know, or used to know, or would have known if they hadn’t been misled by false-but-original intellectual fashions into believing daft things. Sometimes the old, commonplace ideas are so thoroughly forgotten that they become original again. Which is to say that originality is almost as slippery and debatable as truthfulness.

By “effort” I mean how much effort goes into writing. It’s easy to write something true if you’re not concerned about originality: Japan is a mountainous and densely-populated archipelago off the eastern coast of mainland Asia. It’s easy to write something original if you’re not concerned about truth: Japan was founded in 1949 by lobster people displaced by atomic testing in the Bikini Atoll. Writing something both true and original is exceptionally difficult. So difficult that very few writers ever manage it. To judge by results, most don’t even try.

Suppose you have an idea that’s true, but not very original. Should you go to the effort of writing it down? If you have a fair degree of certainty about its truthfulness, it’s probably worth sharing, if only to help increase the amount of truth in the world.

Alternatively, suppose you have an idea that may or may not be true, but you’re pretty sure no-one’s ever had before. In that case, again, it’s probably worth sharing, albeit at the risk of misleading people with what could be an untruth.

But if you’re only, say, fifty percent sure your idea is true, and fifty percent sure it’s original, is it really worth the effort to write down? Probably not.
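To put the same point in back-of-the-envelope form, here’s a toy calculation. It assumes, dubiously, that the two judgments are independent, and the confidence levels are my own inventions:

```python
# If truthfulness and originality are independent judgments, the chance
# an idea clears BOTH bars is the product of your two confidences.
for p_true, p_original in [(0.9, 0.5), (0.5, 0.9), (0.5, 0.5)]:
    print(f"{p_true:.0%} sure it's true x {p_original:.0%} sure it's original"
          f" -> {p_true * p_original:.0%} chance it's both")
```

A coin flip on both counts leaves only a one-in-four chance the piece is both true and original, which is long odds to gamble a night’s writing on.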

One of the nice things about having a novel in progress is you can use it as an outlet for all the ideas you have that might not meet your standards for truthfulness and originality: Okay, I don’t believe this all that strongly, but it’s the kind of thing Katie (or Roland, or Helmut) might believe, so I’ll just rephrase it into that character’s voice and…and you look up from your computer screen and it’s 3 AM and, marvel of marvels, you’ve actually met your day’s quota. You can go to bed without hating yourself.

But a novel can’t be merely a repository for every half-baked idea you’re embarrassed to take responsibility for. Unless those ideas add up to a true and original whole, you’re better off saving yourself the effort.

Does my novel meet that standard? Probably not. But then, very few do.


The medical men of Middlemarch.

There must be two dozen books on my shelves that I’ve never read, but recently, after coming across a couple of references to how dauntingly unreadable Middlemarch is, I decided to re-read it instead, to verify my hazy impression that I’d found it absorbing from the start.

Maybe “absorbing” is the wrong word. Victorian novels demand sifting, extracting, unpacking. Many sentences need to be double-read: once through to sort out how the clauses relate to each other and again to determine how they relate to the story. You’d think I’d find it tedious. I’m not enchanted with complexity for its own sake. My eyelids tend to droop when I read poetry, for instance, even stuff I know I should admire, like Shakespeare. George Eliot begins each chapter with an epigraph, usually poetical; I skim them. But the story is interesting enough that I don’t mind unravelling the prose when it gets knotty. Clive James once disparaged another literary pretzel-twister, Edward Gibbon, for “the kind of stylistic difficulty which leads its admirers to admire themselves, for submitting to the punishment.” Perhaps liking Middlemarch is a kind of masochism.

The other day, awaiting the inevitable callback from my garage to upsell me from a routine oil-and-lube to major repairs, I found myself wondering why mechanics can’t operate the way Mr. Lydgate does in Middlemarch. I know that sounds unbearably pretentious but it’s what I was thinking.

Most readers remember Middlemarch for the thwarted romance of widowed Dorothea Casaubon and the passionate but aimless Will Ladislaw. Mr. Lydgate is the hero of what a screenwriter would call the “B-plot”; to quote the rear cover copy on my Signet Classic paperback, Lydgate is “an ambitious young doctor who is betrayed by his wife’s egoism and his own inner weakness.” The rather haughty surgeon-apothecary, newly arrived in Middlemarch, offends local custom by acting on the principle that a doctor should “simply prescribe, without dispensing drugs or taking percentage from druggists.” He explains that,

it must lower the character of practitioners and be a constant injury to the public if their only mode of getting paid for their work was by their making out long bills for drafts, boluses, and mixtures.

This explanation gets rather muddled in third-hand transmission to a competitor:

The next day Mr. Gambit was told that Lydgate went about saying physic was of no use.

“Indeed!” said he, lifting his eyebrows with cautious surprise. (He was a stout, husky man with a large ring on his fourth finger.) “How will he cure his patients, then?”

“That is what I say,” returned Mrs. Mawmsey, who habitually gave weight to her speech by loading her pronouns. “Does he suppose that people will pay him only to come and sit with them and go away again?”

This business about Lydgate and his rivalry with the town’s other “practitioners” is one of those subtle questions of class and custom that gets lost on the modern reader. On first reading Middlemarch I failed to notice that Lydgate is referred to as “Mr.”, never as “Dr.” The latter honorific is reserved to those, like the town physicians, Dr. Minchin and Dr. Sprague, who have “been to either of the English universities and enjoyed the absence of anatomical and bedside study there”. In other words they have been more expensively though not more comprehensively educated. Mr. Lydgate, by contrast, after his apprenticeship to a country apothecary, has studied at Edinburgh, Paris, and London, there picking up numerous progressive and unsettling ideas.

Middlemarch is set just before and after the accession of William IV in 1830, a time of much reformist ferment. A decade and a half earlier, Parliament had made a stab at straightening out the chaotic system of medical accreditation which then prevailed in the United Kingdom. As S.W.F. Holloway explained in the July 1966 issue of the journal Medical History (“The Apothecaries’ Act, 1815: A Reinterpretation: Part II”), the new system effectively defined nearly all medical practitioners as apothecaries, and regulated them as such. Traditionally apothecaries had filled a role roughly analogous to that of today’s pharmacists, but the lines between the different classes of medical practitioners had become blurred. As Holloway quotes a contemporary source:

In London, and some of our other great towns, there are physicians and surgeons who do not compound or vend medicines; but in the country this distinction of the three branches of the profession does not exist. Except in a few of our largest towns, every man who practises medicine at all, likewise deals in drugs, and must do so … If he were not to supply [patients] with medicines, there is nobody else from whom they could procure them. The consequence is … that over all England the medical practitioners are also apothecaries, within the meaning of this act.

Physicians were an exalted class who could afford to forgo the unseemly necessity of seeking licensure as apothecaries, with its requisite five-year apprenticeship. Men of substance who could afford a degree from Oxford or Cambridge, physicians attended the sickbeds of the titled and propertied; the customary fee for a consultation was one guinea. All other medical men, known inclusively as general practitioners, were traditionally forbidden to charge an attendance fee. Their sole source of income was the “drafts, boluses, and mixtures” they peddled. As Holloway explains:

This system led not only to [the general practitioner] being considered a tradesman in an age when trade was regarded as a debased occupation: it also exposed him to the accusation of over-charging and over-prescribing. The apothecary, it was said in 1703, “makes the deluded Patient pay very extravagant Fees by the intolerable Prices he puts on all the cheap Medicines, and by passing upon him very many more Doses than the Disease requires or the Constitution can bear”.

(You can see why my mind ran to Lydgate as I sat awaiting the call from my mechanic, to pass upon me a Dose my Constitution could not bear.)

By charging for doctoring and not for drugs, Lydgate is offensive not only to the physicians on whose exclusive prerogative he is trespassing, but to his fellow general practitioners Mr. Wrench and Mr. Toller, in whose eyes he is trying to overreach his station:

“I say the most ungentlemanly trick a man can be guilty of is to come among the members of his profession with innovations which are a libel on their time-honoured procedure. That is my opinion, and I am ready to maintain it against anyone who contradicts me.”

“My dear fellow,” said Mr. Toller, striking in pacifically and looking at Mr. Wrench, “the physicians have their toes trodden on more than we have. If you come to dignity it is a question for Minchin and Sprague.”

“Does medical jurisprudence provide nothing against these infringements?” said Mr. Hackbutt with a disinterested desire to offer his lights. “How does the law stand, eh, Hawley?”

“Nothing to be done there,” said Mr. Hawley. “I looked into it for Sprague. You’d only break your nose against a damned judge’s decision.”

What decision is this? Holloway again:

The first step came in 1829 when Chief Justice Best, in Towne v. Gresley, held that an apothecary might charge for his attendance, provided he made no charge for the medicines furnished. But in the following year Lord Tenterden ruled that an apothecary might recover for reasonable attendance as well as for medicines.

Per this judgement, there’s nothing stopping Mr. Lydgate from charging a consulting fee and also pushing lucrative potions on his patients. But he refrains as a matter of principle.

Perhaps an idealistic thinker of the Lydgate type will one day reform the automotive repair industry so that garages are no longer incentivized, as apothecaries once were, to over-prescribe service. A consulting mechanic would examine our car and determine which fluids really needed flushing, which gaskets really needed replacing, then write out a prescription which we’d take to a practicing mechanic up the road, who’d actually carry out the repairs. I’m sure the first such practitioner would arouse much resentment and resistance among his fellow tradespeople. It would make good drama for a novel. Not the main story, probably. A B-plot.


Do crosswalk timers cause more collisions, and if so, why?

Note: Here’s the fourth in a trove of unpublished blog posts I salvaged from an old laptop. The news story referenced is now a few years out of date, but it’s still interesting…at least to me.

[Figure: crosswalk timer rear-end collision]

In the Vancouver Sun a few weeks ago (April 2013) there was an article about how crosswalk timers – the walk signals at many intersections that count down the number of seconds till the light turns yellow – reduce collisions between vehicles and pedestrians, but increase collisions between vehicles.

Arvind Magesan, the University of Calgary researcher who discovered this phenomenon, theorizes that the extra accidents are caused by drivers speeding up to beat the red light:

“If a road is really busy and it’s slow-moving traffic, you can’t really use a countdown to decide to drive faster to get through a light. In places like that, it seems to have a positive effect. It reduces the rates of accidents,” Magesan said Tuesday.

“In places where a driver does have the opportunity to react to this thing – which they are not supposed to be reacting to in the first place – they use the information and accelerate,” Magesan said.

I never realized that drivers “are not supposed to be reacting to” these timers. When I took driver training, not so very many years ago, the instructor explicitly advised us to keep an eye on crosswalk signals to get a sense of when the light was about to change. The idea was that when we saw the blinking red hand, we should be prepared to slow down. But a more reckless driver will obviously take the red hand as a cue to make haste through the intersection.

It’s not only crosswalk signals that are interpreted in ways not intended by the designers. Recall that scene in the movie Starman, where Jeff Bridges’ stranded alien, having learned to drive by observing Karen Allen, interprets the meaning of the traffic signals as “red light, stop – green light, go – yellow light, go very fast.”

A crosswalk timer is just a more explicit blinking red hand, which in turn is a kind of advance warning of the yellow light. They all serve the same purpose: to tell us roughly how long until the light turns red. What we do with that information is up to us. Reckless drivers use it one way, cautious drivers another.

I was skeptical of Arvind Magesan’s crosswalk timer study, so I tracked it down online. Co-authored with Sacha Kapoor, the study is disguised under the unrevealing title Paging Inspector Sands: The Costs of Public Information. In supporting their findings, Magesan and Kapoor assert (citing earlier research) that “providing drivers with information about the time until a light change causes drivers to approach traffic lights more aggressively on average.” That on average is key. Some of us use the information the way my driving instructor wanted us to – to give us a little extra warning that we need to slow down. Others use the information to determine how much they need to speed up to avoid wasting time at a red light.

I would have thought the two effects would balance out – cautious drivers would avoid crashes by slowing down, reckless drivers would get into more crashes by speeding up, and the net effect would be a wash. But when you think about it, it makes sense that accidents would increase even if crosswalk timers didn’t, as the authors claim, make drivers more aggressive on average. Because you need both a reckless and a cautious driver to create the conditions for the kind of crash they describe.

Let me explain. After I read the news story, but before I read the study, I assumed that the higher accident rate was caused by vehicles speeding through yellow lights and colliding head-on with vehicles attempting to turn left. This was consistent with Magesan’s assertion that it was at less busy intersections where the increase in accidents occurred. At busy, slow-moving intersections, vehicles are unable to work up a head of steam, so the crosswalk timer makes no difference. But at less busy intersections, drivers who see the timer from fifty or a hundred yards out might stomp on the gas to try and beat the light. Since less busy intersections tend not to get designated left turn signals, I reasoned, there are more likely to be vehicles waiting there to turn left on yellow, hence, more opportunities for smashes.

My assumption was wrong. The authors broke down the accidents by type, and it seems that collisions involving a “turning movement” increased only negligibly when the crosswalk timers were installed. The greatest increase was in rear-end collisions. What’s more, speeding wasn’t the major cause of the accidents. Tailgating was.

Consider that, in any given pair of vehicles approaching an intersection, there are four possible combinations:

1. Cautious driver following cautious driver.
2. Cautious driver following reckless driver.
3. Reckless driver following reckless driver.
4. Reckless driver following cautious driver.

In scenarios 1 and 2, the crosswalk timer ought to help cautious drivers more accurately gauge when to start slowing down. This can be useful, especially in slippery winter conditions (the study was conducted in Toronto) where braking distance might be three or four times longer than usual. You’d think some number of rear-end collisions would be avoided here.

In scenario 3, where a reckless driver follows a reckless driver, they should both zoom safely through the intersection.

The typical crosswalk timer-induced accident involves scenario 4, where a reckless driver follows too closely behind a cautious driver. The reckless driver sees the timer and concludes that if he sticks closely to the vehicle ahead, he can sneak through the light just as it changes. He reasons that if he can make it through the intersection on time, the driver ahead, who obviously can see the timer just as well as he can, has even less reason to stop. What he doesn’t realize is that the cautious driver ahead isn’t interpreting the signal the same way. The cautious driver hits the brakes just as the reckless driver hits the gas.

In this case, the added information provided by the crosswalk timer is making things worse by facilitating both the recklessness of the reckless driver and the caution of the cautious driver. The accident would have been avoided if the reckless driver had slowed down or if the cautious driver had maintained his speed.
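The asymmetry is easy to check with a toy Monte Carlo of the four pairings. To be clear about what this is: a sketch of the argument only, with every probability invented for illustration; none of it comes from Kapoor and Magesan’s data.

```python
import random

P_RECKLESS = 0.3         # hypothetical share of reckless drivers
P_BASELINE = 0.010       # rear-end risk for any pairing without a timer
P_SCENARIO_4 = 0.050     # reckless follower, cautious leader, timer visible
P_TIMED_CAUTION = 0.005  # cautious follower gauges braking better with a timer

def rear_end_crashes(n_pairs: int, timer: bool, seed: int = 1) -> int:
    rng = random.Random(seed)
    crashes = 0
    for _ in range(n_pairs):
        leader_reckless = rng.random() < P_RECKLESS
        follower_reckless = rng.random() < P_RECKLESS
        if timer and follower_reckless and not leader_reckless:
            p = P_SCENARIO_4       # scenario 4: brake meets gas
        elif timer and not follower_reckless:
            p = P_TIMED_CAUTION    # scenarios 1 and 2: smoother stops
        else:
            p = P_BASELINE         # scenario 3, and all no-timer pairings
        crashes += rng.random() < p
    return crashes

for timer in (False, True):
    print(f"timer={timer}: {rear_end_crashes(100_000, timer)} rear-end crashes")
```

On these made-up numbers the timer makes two of the four pairings safer and still pushes the total up, because the scenario-4 pairings are common enough, and risky enough, to swamp the savings. That’s the shape of the result the study reports.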

Whatever behavioural mechanism is at work, it appears that offering too much information can in fact make traffic signals more dangerous. Does it follow that less information makes signals less dangerous? According to Wikipedia, the very first modern traffic light included a warning that the light was about to change – a buzzer, back then, rather than a yellow light. How would drivers behave at an intersection where there was no warning at all – no buzzer, no yellow light, just red and green, stop and go? It might lead to more running of red lights. Or it might make everyone more cautious as they approached an intersection, for precisely that reason.

If it turns out having some warning is in fact safer than having none, what’s the sweet spot, safety-wise, between not enough warning and too much? And what’s the tradeoff between that optimally safe arrangement and the optimally efficient flow of traffic? After all, the safest traffic condition is probably gridlock, where no-one can build up enough speed to hurt anybody else.

Recently a few blogs have featured this video of Poynton, England, where they removed all traffic signals from the central intersection. In order to navigate the new uncontrolled double-roundabout safely, drivers are forced to actually pay attention to pedestrians and their fellow drivers. The result, apparently, is a much safer and smoother flow of traffic than you might expect.

Predictably, most of the discussion of this innovation has occurred on crunchy-leaning urban design blogs, and is uniformly uncritical. On some forums you can find a few skeptical comments from cyclists and drivers who claim that the new configuration is simply shifting traffic problems to routes that miss the town. (Not that this is necessarily a bad thing.)

Me, I’m agnostic. I’m glad planners are trying out new ideas, but I wonder if the current mania for walkability, shared roadways, and related New Urbanist tropes will in the long run suffer a backlash just as the previous mania for Brutalist skyscrapers, vast concrete plazas, and neighbourhood-gutting freeways did. My impression of that bizarre double-roundabout in the Poynton video is that it looks pleasant enough for pedestrians, but as a driver I’d gladly go a few miles out of my way to avoid it. But then, I’m a North American, and we’re famously flummoxed by roundabouts.

Incidentally, Magesan and Kapoor’s policy recommendation is that crosswalk timers be replaced with audible countdowns that only pedestrians, and not drivers, can hear. If I were a policymaker I’d be reluctant to act on this until the study has been replicated a few times by other researchers in other places. Since many cities seem to be installing these timers, there should be plenty of data available.

As a fairly cautious driver, I’d prefer to keep the visible timers – I find them useful for regulating my speed when conditions are slippery, as they so often are in Canada. I suppose I’ll feel differently if I’m ever rear-ended at one of these intersections.



Brexit: Diff’rent yokes for diff’rent folks.

As a Canadian, I can’t say I was terribly invested in the question of whether the United Kingdom should or shouldn’t leave the European Union. I suppose my slight preference was for a Leave vote, just as my slight preference in the 2014 Scottish referendum was for independence, just as I was pretty sanguine about the prospect of Quebec’s departure during that province’s referendum campaign way back in 1995. I have a sentimental streak of small-is-beautiful conservatism which, if I were English, might lead people to dismiss me as a Little Englander. I’m not altogether clear on why that’s considered a slur.

Up until a few years ago – say, 2007 – I never really questioned the value of EU membership for Britain. The great convenience of free trade and travel around the continent more than compensated, I would have said, for the pin-pricking of petty rules dreamed up by bureaucrats in Brussels.

Then the subprime mortgage crisis happened, leading in 2010 to the Greek debt perma-crisis, now entering its seventh year. Don’t ask me to explain how a real estate bubble in the United States led to the near-default of Greece, threatening to tank the entire European economy. I’m sure you’re a diligent reader of the financial news and you can explain how it’s all the fault of a) hyperregulatory welfare statism or b) the machinations of predatory bankers (pick one).

I’m not a particularly intelligent person, but neither am I outrageously dumb. Let’s say I’m at the 50th percentile, IQ-wise. And while I’m not as well-informed as I know I should be, I do make some effort to keep abreast of the news. If it helps you to place my level of financial literacy, I read the book version of The Big Short a few years before the movie came out – in other words, I’m able to follow business reportage at an airport-bookstore level – putting me, once again, at let’s say the 50th percentile. And after reading nearly a decade of argumentation about the causes of the economic clusterhump we’re still crawling our way out of, I still have no goddamned clue what happened. And about half the population has less of a clue than I have.

That’s ultimately why I’m wary of the EU, and NAFTA, and the WTO, and any other arrangement that erodes national sovereignty. It’s not that I think Greece, or Quebec, or the UK are any less likely on their own to blindly drive into the quicksand than they would be as provinces of a vast multiethnic empire. But as long as national borders exist, national catastrophes are to some degree localized. The effects are concentrated among people with the same cultural values, speaking the same language, using the same currency. Which means it might be possible, barely, for humans of limited intellect to wrap their heads around the causes of the catastrophe and take steps to contain it. And for the equally slow-witted citizens of nearby countries to learn from their neighbours’ errors before the same catastrophe overwhelms them.

If you have libertarian or neo-liberal or internationalist leanings you’re probably rolling your eyes at this retrograde stuff. Computers, the internet, global satellite communications – these things, you say, can’t be un-invented. We’re stuck in this borderless, speed-of-light reality, like it or no. The best thing is to set up a system of supra-national governance where the world’s brightest minds will be empowered to head off crises before they happen.

I guess it’s a matter of outlook. Down here at the 50th percentile I don’t have much confidence that the bright minds at the 99th percentile are as competent to foresee crises as they seem to think they are. In fact, reviewing the history of the last decade and a half or so, what I see is a succession of crises caused or at least exacerbated by the reckless utopianism of 99th-percentile types. The reckless utopians of the so-called right who were confident they could “drain the swamp” of Middle Eastern medievalism. And the reckless utopians of the so-called left who thought the symbolic importance of a common currency outweighed the dangers of duct-taping together a group of incongruous European economies.

It’s true that voters are at least partially to blame for elevating their leaders to positions where they could do such damage. But it’s one thing to elect a nitwit who nearly wrecks your own country. The most frustrating thing in the world must be to find yourself under the yoke of some nitwit you never even voted for, whom you believe – fairly or not – to be wrecking your country from the outside.

I’ve never really experienced that. But I can imagine how it must have felt for Scots who were powerless to thwart Margaret Thatcher’s reforms, or Quebecers offended by Pierre Trudeau’s constitutional swashbuckling, or Brits waking up one day to discover that Angela Merkel had unilaterally abrogated the rules governing asylum seekers in Europe.

Utopianism is the belief that for any problem there is one right, perfect, universal solution which, once it’s imposed, will quiet all critics and bring about perfect contentment forever. A more modest problem-solver says no, in fact, there are any number of potential solutions, each of them involving tradeoffs that will be more satisfactory to some groups, less so to others. Dissatisfied groups cluster together to air their grievances and wind up forming communities with shared values and assumptions. This has been happening for as long as humans have existed – in fact, it’s why there are different countries. It follows that different solutions are likely to be more satisfactory for some countries than for others. Why not, therefore, let different countries solve their problems in their own different ways?



The Know-Nothing.

If I had to choose a passage to introduce you to Scott Alexander’s terrific blog Slate Star Codex, this isn’t the one I’d go with. But it happens to be one I want to riff on, so here it is…

Imagine a space-time rift brings a 19th-century Know-Nothing to your doorstep. He starts debating you on the relative merits and costs of allowing Irish people to mix with the rest of American society. And you have a hard time even getting the energy to debate him. You’re like “Yeah, there are some Irish people around. I think my boss might be half-Irish or something, although I’m not sure. So what?” And he just sputters “But…but…Irish people! It’s not right for Irish and non-Irish people to mix! Everyone knows that!” And not only do you not think that Irish people are a Big Deal, but you’re about 99% sure that after the Know-Nothing spends a couple of months in 21st-century America he’s going to forget about the whole Irish thing too. There’s just no way someone seeing how boring and ordinary Irish-Americans are could continue to consider worrying about it a remotely good use of their time.

The rest of this old post (from 2013) has nothing to do with the Irish. Alexander is a practitioner of polyamory, you see, which is some kind of modern offshoot of what used to be called free love, and he’s making a point about how unthreatening polyamory is, once you get to know the people who practice it. That subject doesn’t interest me at all – I endorse wholeheartedly his title (if not necessarily his argument): Polyamory Is Boring. But his analogy got me wondering. Would the Know-Nothing really come around as easily as Alexander imagines?

Let me extend the scenario. After your fruitless conversation with the time traveller, you part ways. A few months later, after he’s had time to settle in, read the newspapers, catch some TV, strike up conversations with cab drivers and strangers in bars, you run into him again. “Well, what do you think now?” you say. “The Irish aren’t so scary, are they?”

He shakes his head sadly. “You poor fool,” he says. “Everything we warned you about has come true. Irishness has completely overwhelmed the country. It surrounds you. And you can’t even see it.”

Of course, you ask the Know-Nothing to elaborate. But here my imagination fails – I have no idea what he’s observed in the intervening weeks to make him so depressed. I, like you, grew up in a culture so marinated in Irishness that its effects are totally invisible to me.

If you or I were to shimmer across the invisible space-time boundary that separates us from the alternate-history 2016 where the Know-Nothings successfully kept out the Irish, who knows what we’d find. I suspect we wouldn’t much care for the place. We’d find it stuffy, and exclusionary, and most importantly, in some indefinable way, insufficiently Irish.

But the fact that we prefer having been brought up in our own universe doesn’t mean that our side’s arguments (I mean, the arguments of the 19th century folks who took what we interpret to be “our side” in this long-dead dispute) were correct.

It just means our side won.


A few years back, in a post about cratering American birthrates (which I somehow tied in with a discussion of Robert Heinlein’s 1950 sci-fi novel Farmer in the Sky), I wrote that

If America wants to stay productive, it’s hard to see how it (and other developed countries in the same demographic boat, like Canada) can avoid taking in more newcomers.

I then went on for a few paragraphs about the downsides of large-scale immigration – problems of assimilation, mainly. But, I brightly concluded,

Eventually, most likely, the West will absorb and be fortified by the immigrant wave, as it has previous waves.

Recently I re-read that passage and I thought – wait, what? Do I have any empirical reason for believing that we will be “fortified” by new immigrants? What does that even mean?

I suppose I was making the same assumptions that underlie Scott Alexander’s parable of the time traveller. Strength in diversity! A nation of immigrants! The cultural mosaic! Irish, Ukrainians, Jews, Chinese – they’ve all successfully integrated, so why shouldn’t the next batch?

Only…if I were to extend the above list of immigrant ethnicities I would pretty quickly arrive at a few that have, as yet, integrated noticeably less well. (Depending where you live, you probably have a different unsuccessfully-integrated group in mind.) Maybe these groups aren’t to blame for their exclusion; maybe they’ve been discriminated against by the native-born. Maybe “integration” isn’t even a desirable goal. I’m not interested in arguing those points right now. I only mean there are differences between Irish immigration in the 1850s and Jewish immigration in the 1910s and (say) Syrian immigration in the 2010s. Differences in “them”, obviously, but just as importantly, differences in “us” – how many of us there are, what kinds of communities we live in, what jobs are available, and perhaps most of all, what we believe.

Some of those differences should make integration less painful. We’re certainly less overtly racist than we used to be, and we pay lip-service (sometimes without knowing exactly what we mean) to tolerance and diversity and so forth. On the other hand, we’ve adopted views on things like public displays of sexuality, and sacrilegious speech, and gender norms, that increase our cultural separation from some of the immigrants we’re welcoming. The observant Muslim parents of a teenage girl in 1950s Toronto might have worried about their daughter being picked on because of her headscarf, but they wouldn’t have had to worry about her being exposed to Snapchat or Keeping Up With the Kardashians or the new Ontario sex ed curriculum.

People who demonize conservative immigration skeptics like Mark Steyn and Steve Sailer as racists and Islamophobes and so forth tend not to actually read what they write, so it doesn’t register that their skepticism might be rooted in a concern for the fragility of our common liberal values – basic things like freedom of speech, religious toleration, and the right of uncovered women to go for a walk without getting harassed. Perhaps their paranoia is overheated, but at least it acknowledges that integration works both ways. The Irish didn’t just come to America and become more American; America became more Irish. And the same will happen with today’s immigrants.

Maybe we’re cool with that, or maybe we’re just confident that the changes in “us” will all be for whatever we define as the better. But in the long run, it hardly matters what we think. The citizens of the future will uncritically adapt to the culture we bequeath them, and find arguments like this one as unfathomable as we find the frettings of the Know-Nothings.


I went off on a bit of a tangent there – I didn’t set out intending to write about immigration, not exactly. What got me thinking about Scott Alexander’s Know-Nothing was this passage in Brave New World.

Early on we’re introduced to Helmholtz Watson, lecturer at the College of Emotional Engineering. Helmholtz is troubled by an inchoate sense that, despite the state of universal contentment society has achieved in the year 632 After Ford, something vital is missing. He tries to explain to a friend what he means:

He was silent, then, “You see,” he went on at last, “I’m pretty good at inventing phrases – you know, the sort of words that suddenly make you jump, almost as though you’d sat on a pin, they seem so new and exciting even though they’re about something hypnopaedically obvious. But that doesn’t seem enough. It’s not enough for the phrases to be good; what you make with them ought to be good too.”

“But your things are good, Helmholtz.”

“Oh, as far as they go.” Helmholtz shrugged his shoulders. “But they go such a little way. They aren’t important enough, somehow. I feel I could do something much more important. Yes, and more intense, more violent. But what? What is there more important to say?”

I shut the book and reflected how in every generation, people complain that things are getting worse – morals are deteriorating, the scope of personal freedom is shrinking, tastes are coarsening, the best and highest works of our culture gather dust while the mob lavishes praise on ephemera. Optimists point to the fact that pessimists have been tolling the same doleful themes since at least Plato’s time as proof that the pessimists can be safely ignored: According to those old farts we’ve been driving off a cliff for two and a half millennia. Yet here we still are!

I share their optimism much of the time. Indeed, here we are! We’ve got it pretty good! Food is cheap, yoga pants are amazing for all sorts of reasons, and it appears euthanasia-on-demand will win the race against my accelerating decrepitude. Go toll your bell somewhere else, Gloomy Gus!

But reading Helmholtz’s report from the distant future, it occurred to me that perhaps the Gloomy Guses have been right all along. Every one of them.

In every generation things are lost. Some of those things are deliberately buried, because manners change, and people will no longer put up with blackface dance routines or teen sex comedies where the boys spy on the girl’s locker room. Often, in an excess of scrupulousness, good stuff gets buried with the bad. But most of the good stuff isn’t even deliberately buried, it just gets left behind and forgotten. And the people who’ve forgotten it don’t even know what they’re missing.

You might say it’s nothing to worry about. Our culture keeps generating new stuff to replace what’s lost, and if that new stuff isn’t as good as the old stuff, that’s fine, the culture will just adjust its definition of quality and future folks won’t know the difference.

Assuming, that is, that the conditions enabling us to generate new stuff will always prevail. But what if they don’t? What if historical progress actually has an end point?

Brave New World illustrates one way we could put a stop to history: we could actually bio-engineer creativity out of the human race. Helmholtz Watson, with his vague urges toward individual expression, is an aberration in the world of 632 A.F. – a genetic mistake of a kind society is working to eliminate. Another hundred years of tweaking the mix in the test tubes, and socially destabilizing brooders like Helmholtz might be done away with entirely.

I wish I could say confidently that we’ll never elect to bio-engineer our humanity away like that. But even if the human race remains inwardly human, external conditions might impede our creativity. Overcrowding. Technological dependency. The sheer bulk of our past achievements has already made it impossible to be a generalist in the manner of Newton or Goethe or Ben Franklin; if you want to add anything significant to the corpus of cultural knowledge, you now have to specialize. We might reach a point where the number of ideas you have to know already in order to conceive a new idea is so immense that no human brain can handle it; we’ll have no choice but to turn the process of ideation over to computers. Even demoralizing reflections like this one, the fear that all the good ideas have already been thought up, might increasingly lead to torpor and civilizational paralysis.

In the worst case, humanity might go the way of the famous mouse utopia experiment at NIMH – mouse decadence, then mouse apathy, then mouse barbarism, then total population collapse. But I suspect we’ll settle instead into something not far removed from Aldous Huxley’s prophetic satire – maintained by robots, pacified by porn and marijuana, stimulating the atrophied remnants of our thymos with virtual status-seeking – unlocking special achievements in video games and the like. I mean, we in the West aren’t too far from that already, except that the robots haven’t taken quite all the jobs yet so some of us still have to work. And you know what, it’s not that bad. We can’t regret what we don’t know we’ve lost.

When the Know-Nothing time-traveller arrives on our doorstep, we’ll listen with raised eyebrows to his crazy harangue. “The arts? Philosophy? The struggle for distinction? Geez, it all sounds awful. Why don’t you go for a walk, old man, take a look around. You’ll see how much better we have things now.”


PS. I was re-reading Brave New World to celebrate the recent wrapping-up of my own novel on a similar theme. More about this soon…