True and original…or, why we write (or don’t).

For the last few years most of my intellectual fuel has been burned up in writing my first novel. But since I finished the novel this spring (apart from a few minor tweaks and rearrangements, still ongoing) I’ve had a hard time motivating myself to resume my old habit of blogging. Laziness is clearly a factor, but I don’t think I’m any lazier than I used to be – only more realistic about what I expect my writing to achieve.

Suppose we evaluate every piece of writing, existing or potential, on the following dimensions:

1) Truthfulness.
2) Originality.
3) Effort.

By “truthfulness” I mean, okay, yes, empirical truth: Did this happen in the real world? Can it be relied on, to a reasonable degree of statistical certainty, to happen again? But fiction can also be truthful, if its artifice reveals deep-down truths about humanity, social forces, whatever. When a character in a story does something no human would actually do, “It’s fiction!” isn’t a defense – unless the character is a robot or an alien, in which case “It’s science-fiction!” is a defense, provided that the robot’s or alien’s behaviour points to something true about our own reality. I guess what I’m saying is that truthfulness is a slippery and debatable concept – but that doesn’t free us to knowingly write lies.

“Originality” seems pretty straightforward. The older you get, the more you read, the more you start seeing the same old ideas, the same old arguments, coming around again and again. This doesn’t mean the old ideas or arguments are untrue. Sometimes people need to be reminded of things they already know, or used to know, or would have known if they hadn’t been misled by false-but-original intellectual fashions into believing daft things. Sometimes the old, commonplace ideas are so thoroughly forgotten that they become original again. Which is to say that originality is almost as slippery and debatable as truthfulness.

By “effort” I mean how much effort goes into writing. It’s easy to write something true if you’re not concerned about originality: Japan is a mountainous and densely-populated archipelago off the eastern coast of mainland Asia. It’s easy to write something original if you’re not concerned about truth: Japan was founded in 1949 by lobster people displaced by atomic testing in the Bikini Atoll. Writing something both true and original is exceptionally difficult. So difficult that very few writers ever manage it. To judge by results, most don’t even try.

Suppose you have an idea that’s true, but not very original. Should you go to the effort of writing it down? If you have a fair degree of certainty about its truthfulness, it’s probably worth sharing, if only to help increase the amount of truth in the world.

Alternatively, suppose you have an idea that may or may not be true, but you’re pretty sure no-one’s ever had before. In that case, again, it’s probably worth sharing, albeit at the risk of misleading people with what could be an untruth.

But if you’re only, say, fifty percent sure your idea is true, and fifty percent sure it’s original, is it really worth the effort to write down? If the two odds are independent, that’s only a one-in-four chance you’d be contributing something both true and original. Probably not.

One of the nice things about having a novel in progress is you can use it as an outlet for all the ideas you have that might not meet your standards for truthfulness and originality: Okay, I don’t believe this all that strongly, but it’s the kind of thing Katie (or Roland, or Helmut) might believe, so I’ll just rephrase it into that character’s voice and…and you look up from your computer screen and it’s 3 AM and, marvel of marvels, you’ve actually met your day’s quota. You can go to bed without hating yourself.

But a novel can’t be merely a repository for every half-baked idea you’re embarrassed to take responsibility for. Unless those ideas add up to a true and original whole, you’re better off saving yourself the effort.

Does my novel meet that standard? Probably not. But then, very few do.

M.

The medical men of Middlemarch.

There must be two dozen books on my shelves that I’ve never read, but recently, after coming across a couple references to how dauntingly unreadable Middlemarch is, I passed them over to re-read it instead, to verify my hazy impression that I’d found it absorbing from the start.

Maybe “absorbing” is the wrong word. Victorian novels demand sifting, extracting, unpacking. Many sentences need to be double-read: once through to sort out how the clauses relate to each other and again to determine how they relate to the story. You’d think I’d find it tedious. I’m not enchanted with complexity for its own sake. My eyelids tend to droop when I read poetry, for instance, even stuff I know I should admire, like Shakespeare. George Eliot begins each chapter with an epigraph, usually poetical; I skim them. But the story is interesting enough that I don’t mind unravelling the prose when it gets knotty. Clive James once disparaged another literary pretzel-twister, Edward Gibbon, for “the kind of stylistic difficulty which leads its admirers to admire themselves, for submitting to the punishment.” Perhaps liking Middlemarch is a kind of masochism.

The other day, awaiting the inevitable callback from my garage to upsell me from a routine oil-and-lube to major repairs, I found myself wondering why mechanics can’t operate the way Mr. Lydgate does in Middlemarch. I know that sounds unbearably pretentious but it’s what I was thinking.

Most readers remember Middlemarch for the thwarted romance of widowed Dorothea Casaubon and the passionate but aimless Will Ladislaw. Mr. Lydgate is the hero of what a screenwriter would call the “B-plot”; to quote the rear cover copy on my Signet Classic paperback, Lydgate is “an ambitious young doctor who is betrayed by his wife’s egoism and his own inner weakness.” The rather haughty surgeon-apothecary, newly arrived in Middlemarch, offends local custom by acting on the principle that a doctor should “simply prescribe, without dispensing drugs or taking percentage from druggists.” He explains that,

it must lower the character of practitioners and be a constant injury to the public if their only mode of getting paid for their work was by their making out long bills for drafts, boluses, and mixtures.

This explanation gets rather muddled in third-hand transmission to a competitor:

The next day Mr. Gambit was told that Lydgate went about saying physic was of no use.

“Indeed!” said he, lifting his eyebrows with cautious surprise. (He was a stout, husky man with a large ring on his fourth finger.) “How will he cure his patients, then?”

“That is what I say,” returned Mrs. Mawmsey, who habitually gave weight to her speech by loading her pronouns. “Does he suppose that people will pay him only to come and sit with them and go away again?”

This business about Lydgate and his rivalry with the town’s other “practitioners” is one of those subtle questions of class and custom that get lost on the modern reader. On first reading Middlemarch I failed to notice that Lydgate is referred to as “Mr.”, never as “Dr.” The latter honorific is reserved for those, like the town physicians, Dr. Minchin and Dr. Sprague, who have “been to either of the English universities and enjoyed the absence of anatomical and bedside study there”. In other words they have been more expensively though not more comprehensively educated. Mr. Lydgate, by contrast, after his apprenticeship to a country apothecary, has studied at Edinburgh, Paris, and London, there picking up numerous progressive and unsettling ideas.

Middlemarch is set just before and after the accession of William IV in 1830, a time of much reformist ferment. A decade and a half earlier, Parliament had made a stab at straightening out the chaotic system of medical accreditation which then prevailed in the United Kingdom. As S.W.F. Holloway explained in the July 1966 issue of the journal Medical History (“The Apothecaries’ Act, 1815: A Reinterpretation: Part II”), the new system effectively defined nearly all medical practitioners as apothecaries, and regulated them as such. Traditionally apothecaries had filled a role roughly analogous to that of pharmacists today, but the lines between the different classes of medical practitioners had become blurred. As Holloway quotes a contemporary source:

In London, and some of our other great towns, there are physicians and surgeons who do not compound or vend medicines; but in the country this distinction of the three branches of the profession does not exist. Except in a few of our largest towns, every man who practises medicine at all, likewise deals in drugs, and must do so … If he were not to supply [patients] with medicines, there is nobody else from whom they could procure them. The consequence is … that over all England the medical practitioners are also apothecaries, within the meaning of this act.

Physicians were an exalted class who could afford to forgo the unseemly necessity of seeking licensure as apothecaries, which required a five-year apprenticeship. Men of substance who could afford a degree from Oxford or Cambridge, physicians attended the sickbeds of the titled and propertied; the customary fee for a consultation was one guinea. All other medical men, known inclusively as general practitioners, were traditionally forbidden to charge an attendance fee. Their sole source of income was the “drafts, boluses, and mixtures” they peddled. As Holloway explains:

This system led not only to [the general practitioner] being considered a tradesman in an age when trade was regarded as a debased occupation: it also exposed him to the accusation of over-charging and over-prescribing. The apothecary, it was said in 1703, “makes the deluded Patient pay very extravagant Fees by the intolerable Prices he puts on all the cheap Medicines, and by passing upon him very many more Doses than the Disease requires or the Constitution can bear”.

(You can see why my mind ran to Lydgate as I sat awaiting the call from my mechanic, to pass upon me a Dose my Constitution could not bear.)

By charging for doctoring and not for drugs, Lydgate is offensive not only to the physicians on whose exclusive prerogative he is trespassing, but to his fellow general practitioners Mr. Wrench and Mr. Toller, to whom he appears to be trying to rise above his station:

“I say the most ungentlemanly trick a man can be guilty of is to come among the members of his profession with innovations which are a libel on their time-honoured procedure. That is my opinion, and I am ready to maintain it against anyone who contradicts me.”

“My dear fellow,” said Mr. Toller, striking in pacifically and looking at Mr. Wrench, “the physicians have their toes trodden on more than we have. If you come to dignity it is a question for Minchin and Sprague.”

“Does medical jurisprudence provide nothing against these infringements?” said Mr. Hackbutt with a disinterested desire to offer his lights. “How does the law stand, eh, Hawley?”

“Nothing to be done there,” said Mr. Hawley. “I looked into it for Sprague. You’d only break your nose against a damned judge’s decision.”

What decision is this? Holloway again:

The first step came in 1829 when Chief Justice Best, in Towne v. Gresley, held that an apothecary might charge for his attendance, provided he made no charge for the medicines furnished. But in the following year Lord Tenterden ruled that an apothecary might recover for reasonable attendance as well as for medicines.

Per this judgement, there’s nothing stopping Mr. Lydgate from charging a consulting fee and also pushing lucrative potions on his patients. But he refrains as a matter of principle.

Perhaps an idealistic thinker of the Lydgate type will one day reform the automotive repair industry so that garages are no longer incentivized, as apothecaries once were, to over-prescribe service. A consulting mechanic would examine our car and determine which fluids really needed flushing, which gaskets really needed replacing, then write out a prescription which we’d take to a practicing mechanic up the road, who’d actually carry out the repairs. I’m sure the first such practitioner would arouse much resentment and resistance among his fellow tradespeople. It would make good drama for a novel. Not the main story, probably. A B-plot.

M.

Do crosswalk timers cause more collisions, and if so, why?

Note: Here’s the fourth in a trove of unpublished blog posts I salvaged from an old laptop. The news story referenced is now a few years out of date, but it’s still interesting…at least to me.


In the Vancouver Sun a few weeks ago (April 2013) there was an article about how crosswalk timers – the walk signals at many intersections that count down the number of seconds till the light turns yellow – reduce collisions between vehicles and pedestrians, but increase collisions between vehicles.

Arvind Magesan, the University of Calgary researcher who discovered this phenomenon, theorizes that the extra accidents are caused by drivers speeding up to beat the red light:

“If a road is really busy and it’s slow-moving traffic, you can’t really use a countdown to decide to drive faster to get through a light. In places like that, it seems to have a positive effect. It reduces the rates of accidents,” Magesan said Tuesday.

“In places where a driver does have the opportunity to react to this thing – which they are not supposed to be reacting to in the first place – they use the information and accelerate,” Magesan said.

I never realized that drivers “are not supposed to be reacting to” these timers. When I took driver training, not so very many years ago, the instructor explicitly advised us to keep an eye on crosswalk signals to get a sense of when the light was about to change. The idea was that when we saw the blinking red hand, we should be prepared to slow down. But a more reckless driver will obviously take the red hand as a cue to make haste through the intersection.

It’s not only crosswalk signals that are interpreted in ways not intended by the designers. Recall that scene in the movie Starman, where Jeff Bridges’ stranded alien, having learned to drive by observing Karen Allen, interprets the meaning of the traffic signals as “red light, stop – green light, go – yellow light, go very fast.”

A crosswalk timer is just a more explicit blinking red hand, which in turn is a kind of advance warning of the yellow light. They all serve the same purpose: to tell us roughly how long until the light turns red. What we do with that information is up to us. Reckless drivers use it one way, cautious drivers another.

I was skeptical of Arvind Magesan’s crosswalk timer study, so I tracked it down online. Co-authored with Sacha Kapoor, the study is disguised under the unrevealing title Paging Inspector Sands: The Costs of Public Information. In support of their findings, Magesan and Kapoor assert (citing earlier research) that “providing drivers with information about the time until a light change causes drivers to approach traffic lights more aggressively on average.” That on average is key. Some of us use the information the way my driving instructor wanted us to – to give us a little extra warning that we need to slow down. Others use the information to determine how much they need to speed up to avoid wasting time at a red light.

I would have thought the two effects would balance out – cautious drivers would avoid crashes by slowing down, reckless drivers would get into more crashes by speeding up, and the net effect would be a wash. But when you think about it, it makes sense that accidents would increase even if crosswalk timers didn’t, as the authors claim, make drivers more aggressive on average. Because you need both a reckless and a cautious driver to create the conditions for the kind of crash they describe.

Let me explain. After I read the news story, but before I read the study, I assumed that the higher accident rate was caused by vehicles speeding through yellow lights and colliding head-on with vehicles attempting to turn left. This was consistent with Magesan’s assertion that the increase in accidents occurred at less busy intersections. At busy, slow-moving intersections, vehicles are unable to work up a head of steam, so the crosswalk timer makes no difference. But at less busy intersections, drivers who see the timer from fifty or a hundred yards out might stomp on the gas to try to beat the light. Since less busy intersections tend not to get dedicated left-turn signals, I reasoned, there are more likely to be vehicles waiting there to turn left on yellow, and hence more opportunities for smashes.

My assumption was wrong. The authors broke down the accidents by type, and it seems that collisions involving a “turning movement” increased only negligibly when the crosswalk timers were installed. The greatest increase was in rear-end collisions. What’s more, speeding wasn’t the major cause of the accidents. Tailgating was.

Consider that, in any given pair of vehicles approaching an intersection, there are four possible combinations:

1. Cautious driver following cautious driver.
2. Cautious driver following reckless driver.
3. Reckless driver following reckless driver.
4. Reckless driver following cautious driver.

In scenarios 1 and 2, the crosswalk timer ought to help cautious drivers more accurately gauge when to start slowing down. This can be useful, especially in slippery winter conditions (the study was conducted in Toronto) where braking distance might be three or four times longer than usual. You’d think some number of rear-end collisions would be avoided here.

In scenario 3, where a reckless driver follows a reckless driver, they should both zoom safely through the intersection.

The typical crosswalk timer-induced accident involves scenario 4, where a reckless driver follows too closely behind a cautious driver. The reckless driver sees the timer and concludes that if he sticks closely to the vehicle ahead, he can sneak through the light just as it changes. He reasons that if he can make it through the intersection on time, the driver ahead, who obviously can see the timer just as well as he can, has even less reason to stop. What he doesn’t realize is that the cautious driver ahead isn’t interpreting the signal the same way. The cautious driver hits the brakes just as the reckless driver hits the gas.

In this case, the added information provided by the crosswalk timer is making things worse by facilitating both the recklessness of the reckless driver and the caution of the cautious driver. The accident would have been avoided if the reckless driver had slowed down or if the cautious driver had maintained his speed.
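For the statistically inclined, here’s a crude back-of-the-envelope simulation of that pairing logic. It’s a sketch in Python with invented probabilities – nothing here comes from Kapoor and Magesan’s data – meant only to show that the overall accident rate can rise even when the share of reckless drivers, and hence the “average” aggression, stays fixed:

```python
import random

# Invented, purely illustrative parameters -- not from the actual study.
P_RECKLESS = 0.3  # assumed share of reckless drivers in the population

def crash_risk(lead_reckless, follower_reckless, timer):
    """Assumed rear-end crash risk for one (lead, follower) encounter."""
    if not timer:
        return 0.002  # no timer: same small baseline risk for every pairing
    if follower_reckless and not lead_reckless:
        # Scenario 4: the follower accelerates just as the lead brakes.
        return 0.010
    # Scenarios 1-3: the timer helps cautious followers brake earlier,
    # or does no harm when both drivers zoom through together.
    return 0.001

def crash_rate(timer, trials=1_000_000):
    crashes = 0
    for _ in range(trials):
        lead = random.random() < P_RECKLESS
        follower = random.random() < P_RECKLESS
        if random.random() < crash_risk(lead, follower, timer):
            crashes += 1
    return crashes / trials

print("without timer:", crash_rate(False))  # ~0.0020
print("with timer:   ", crash_rate(True))   # ~0.0029
```

Three of the four pairings actually get safer once the timer goes in, but the mixed pairing is dangerous enough to drag the overall rate up by nearly half.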

Whatever behavioural mechanism is at work, it appears that offering too much information can in fact make traffic signals more dangerous. Does it follow that less information makes signals less dangerous? According to Wikipedia, the very first modern traffic light included a warning that the light was about to change – a buzzer, back then, rather than a yellow light. How would drivers behave at an intersection where there was no warning at all – no buzzer, no yellow light, just red and green, stop and go? It might lead to more running of red lights. Or it might make everyone more cautious as they approached an intersection, for precisely that reason.

If it turns out having some warning is in fact safer than having none, what’s the sweet spot, safety-wise, between not enough warning and too much? And what’s the tradeoff between that optimally safe arrangement and the optimally efficient flow of traffic? After all, the safest traffic condition is probably gridlock, where no-one can build up enough speed to hurt anybody else.

Recently a few blogs have featured this video of Poynton, England, where they removed all traffic signals from the central intersection. In order to navigate the new uncontrolled double-roundabout safely, drivers are forced to actually pay attention to pedestrians and their fellow drivers. The result, apparently, is a much safer and smoother flow of traffic than you might expect.

Predictably, most of the discussion of this innovation has occurred on crunchy-leaning urban design blogs, and is uniformly uncritical. On some forums you can find a few skeptical comments from cyclists and drivers who claim that the new configuration is simply shifting traffic problems to routes that miss the town. (Not that this is necessarily a bad thing.)

Me, I’m agnostic. I’m glad planners are trying out new ideas, but I wonder if the current mania for walkability, shared roadways, and related New Urbanist tropes will in the long run suffer a backlash just as the previous mania for Brutalist skyscrapers, vast concrete plazas, and neighbourhood-gutting freeways did. My impression of that bizarre double-roundabout in the Poynton video is that it looks pleasant enough for pedestrians, but as a driver I’d gladly go a few miles out of my way to avoid it. But then, I’m a North American, and we’re famously flummoxed by roundabouts.

Incidentally, Magesan and Kapoor’s policy recommendation is that crosswalk timers be replaced with audible countdowns that only pedestrians, and not drivers, can hear. If I were a policymaker I’d be reluctant to act on this until the study has been replicated a few times by other researchers in other places. Since many cities seem to be installing these timers, there should be plenty of data available.

As a fairly cautious driver, I’d prefer to keep the visible timers – I find them useful for regulating my speed when conditions are slippery, as they so often are in Canada. I suppose I’ll feel differently if I’m ever rear-ended at one of these intersections.

M.


Brexit: Diff’rent yokes for diff’rent folks.

As a Canadian, I can’t say I was terribly invested in the question of whether the United Kingdom should or shouldn’t leave the European Union. I suppose my slight preference was for a Leave vote, just as my slight preference in the 2014 Scottish referendum was for independence, just as I was pretty sanguine about the prospect of Quebec’s departure during that province’s referendum campaign way back in 1995. I have a sentimental streak of small-is-beautiful conservatism which, if I were English, might lead people to dismiss me as a Little Englander. I’m not altogether clear on why that’s considered a slur.

Up until a few years ago – say, 2007 – I never really questioned the value of EU membership for Britain. The great convenience of free trade and travel around the continent more than compensated, I would have said, for the pin-pricking of petty rules dreamed up by bureaucrats in Brussels.

Then the subprime mortgage crisis happened, leading in 2010 to the Greek debt perma-crisis, now entering its seventh year. Don’t ask me to explain how a real estate bubble in the United States led to the near-default of Greece, threatening to tank the entire European economy. I’m sure you’re a diligent reader of the financial news and you can explain how it’s all the fault of a) hyperregulatory welfare statism or b) the machinations of predatory bankers (pick one).

I’m not a particularly intelligent person, but neither am I outrageously dumb. Let’s say I’m at the 50th percentile, IQ-wise. And while I’m not as well-informed as I know I should be, I do make some effort to keep abreast of the news. If it helps you to place my level of financial literacy, I read the book version of The Big Short a few years before the movie came out – in other words, I’m able to follow business reportage at an airport-bookstore level – putting me, once again, at let’s say the 50th percentile. And after reading nearly a decade of argumentation about the causes of the economic clusterhump we’re still crawling our way out of, I still have no goddamned clue what happened. And about half the population has less of a clue than I have.

That’s ultimately why I’m wary of the EU, and NAFTA, and the WTO, and any other arrangement that erodes national sovereignty. It’s not that I think Greece, or Quebec, or the UK are any less likely on their own to blindly drive into the quicksand than they would be as provinces of a vast multiethnic empire. But as long as national borders exist, national catastrophes are to some degree localized. The effects are concentrated among people with the same cultural values, speaking the same language, using the same currency. Which means it might be possible, barely, for humans of limited intellect to wrap their heads around the causes of the catastrophe and take steps to contain it. And for the equally slow-witted citizens of nearby countries to learn from their neighbours’ errors before the same catastrophe overwhelms them.

If you have libertarian or neo-liberal or internationalist leanings you’re probably rolling your eyes at this retrograde stuff. Computers, the internet, global satellite communications – these things, you say, can’t be un-invented. We’re stuck in this borderless, speed-of-light reality, like it or no. The best thing is to set up a system of supra-national governance where the world’s brightest minds will be empowered to head off crises before they happen.

I guess it’s a matter of outlook. Down here at the 50th percentile I don’t have much confidence that the bright minds at the 99th percentile are as competent to foresee crises as they seem to think they are. In fact, reviewing the history of the last decade and a half or so, what I see is a succession of crises caused or at least exacerbated by the reckless utopianism of 99th-percentile types. The reckless utopians of the so-called right who were confident they could “drain the swamp” of Middle Eastern medievalism. And the reckless utopians of the so-called left who thought the symbolic importance of a common currency outweighed the dangers of duct-taping together a group of incongruous European economies.

It’s true that voters are at least partially to blame for elevating their leaders to positions where they could do such damage. But it’s one thing to elect a nitwit who nearly wrecks your own country. The most frustrating thing in the world must be to find yourself under the yoke of some nitwit you never even voted for, whom you believe – fairly or not – to be wrecking your country from the outside.

I’ve never really experienced that. But I can imagine how it must have felt for Scots who were powerless to thwart Margaret Thatcher’s reforms, or Quebecers offended by Pierre Trudeau’s constitutional swashbuckling, or Brits waking up one day to discover that Angela Merkel had unilaterally abrogated the rules governing asylum seekers in Europe.

Utopianism is the belief that for any problem there is one right, perfect, universal solution which, once it’s imposed, will quiet all critics and bring about perfect contentment forever. A more modest problem-solver says no, in fact, there are any number of potential solutions, each of them involving tradeoffs that will be more satisfactory to some groups, less so to others. Dissatisfied groups cluster together to air their grievances and wind up forming communities with shared values and assumptions. This has been happening for as long as humans have existed – in fact, it’s why there are different countries. It follows that different solutions are likely to be more satisfactory for some countries than for others. Why not, therefore, let different countries solve their problems in their own different ways?

M.


The Know-Nothing.

If I had to choose a passage to introduce you to Scott Alexander’s terrific blog Slate Star Codex, this isn’t the one I’d go with. But it happens to be one I want to riff on, so here it is…

Imagine a space-time rift brings a 19th-century Know-Nothing to your doorstep. He starts debating you on the relative merits and costs of allowing Irish people to mix with the rest of American society. And you have a hard time even getting the energy to debate him. You’re like “Yeah, there are some Irish people around. I think my boss might be half-Irish or something, although I’m not sure. So what?” And he just sputters “But…but…Irish people! It’s not right for Irish and non-Irish people to mix! Everyone knows that!” And not only do you not think that Irish people are a Big Deal, but you’re about 99% sure that after the Know-Nothing spends a couple of months in 21st-century America he’s going to forget about the whole Irish thing too. There’s just no way someone seeing how boring and ordinary Irish-Americans are could continue to consider worrying about it a remotely good use of their time.

The rest of this old post (from 2013) has nothing to do with the Irish. Alexander is a practitioner of polyamory, you see, which is some kind of modern offshoot of what used to be called free love, and he’s making a point about how unthreatening polyamory is, once you get to know the people who practice it. That subject doesn’t interest me at all – I endorse wholeheartedly his title (if not necessarily his argument): Polyamory Is Boring. But his analogy got me wondering. Would the Know-Nothing really come around as easily as Alexander imagines?

Let me extend the scenario. After your fruitless conversation with the time traveller, you part ways. A few months later, after he’s had time to settle in, read the newspapers, catch some TV, strike up conversations with cab drivers and strangers in bars, you run into him again. “Well, what do you think now?” you say. “The Irish aren’t so scary, are they?”

He shakes his head sadly. “You poor fool,” he says. “Everything we warned you about has come true. Irishness has completely overwhelmed the country. It surrounds you. And you can’t even see it.”

Of course, you ask the Know-Nothing to elaborate. But here my imagination fails – I have no idea what he’s observed in the intervening weeks to make him so depressed. I, like you, grew up in a culture so marinated in Irishness that its effects are totally invisible to me.

If you or I were to shimmer across the invisible space-time boundary that separates us from the alternate-history 2016 where the Know-Nothings successfully kept out the Irish, who knows what we’d find. I suspect we wouldn’t much care for the place. We’d find it stuffy, and exclusionary, and most importantly, in some indefinable way, insufficiently Irish.

But the fact that we prefer having been brought up in our own universe doesn’t mean that our side’s arguments (I mean, the arguments of the 19th century folks who took what we interpret to be “our side” in this long-dead dispute) were correct.

It just means our side won.

***

A few years back, in a post about cratering American birthrates (which I somehow tied in with a discussion of Robert Heinlein’s 1950 sci-fi novel Farmer in the Sky), I wrote that

If America wants to stay productive, it’s hard to see how it (and other developed countries in the same demographic boat, like Canada) can avoid taking in more newcomers.

I then went on for a few paragraphs about the downsides of large-scale immigration – problems of assimilation, mainly. But, I brightly concluded,

Eventually, most likely, the West will absorb and be fortified by the immigrant wave, as it has previous waves.

Recently I re-read that passage and I thought – wait, what? Do I have any empirical reason for believing that we will be “fortified” by new immigrants? What does that even mean?

I suppose I was making the same assumptions that underlie Scott Alexander’s parable of the time traveller. Strength in diversity! A nation of immigrants! The cultural mosaic! Irish, Ukrainians, Jews, Chinese – they’ve all successfully integrated, so why shouldn’t the next batch?

Only…if I were to extend the above list of immigrant ethnicities I would pretty quickly arrive at a few that have, as yet, integrated noticeably less well. (Depending where you live, you probably have a different unsuccessfully-integrated group in mind.) Maybe these groups aren’t to blame for their exclusion; maybe they’ve been discriminated against by the native-born. Maybe “integration” isn’t even a desirable goal. I’m not interested in arguing those points right now. I only mean there are differences between Irish immigration in the 1850s and Jewish immigration in the 1910s and (say) Syrian immigration in the 2010s. Differences in “them”, obviously, but just as importantly, differences in “us” – how many of us there are, what kinds of communities we live in, what jobs are available, and perhaps most of all, what we believe.

Some of those differences should make integration less painful. We’re certainly less overtly racist than we used to be, and we pay lip-service (sometimes without knowing exactly what we mean) to tolerance and diversity and so forth. On the other hand, we’ve adopted views on things like public displays of sexuality, and sacrilegious speech, and gender norms, that increase our cultural separation from some of the immigrants we’re welcoming. The observant Muslim parents of a teenage girl in 1950s Toronto might have worried about their daughter being picked on because of her headscarf, but they wouldn’t have had to worry about her being exposed to Snapchat or Keeping Up With the Kardashians or the new Ontario sex ed curriculum.

People who demonize conservative immigration skeptics like Mark Steyn and Steve Sailer as racists and Islamophobes and so forth tend not to actually read what they write, so it doesn’t register that their skepticism might be rooted in a concern for the fragility of our common liberal values – basic things like freedom of speech, religious toleration, and the right of uncovered women to go for a walk without getting harassed. Perhaps their paranoia is overheated, but at least it acknowledges that integration works both ways. The Irish didn’t just come to America and become more American; America became more Irish. And the same will happen with today’s immigrants.

Maybe we’re cool with that, or maybe we’re just confident that the changes in “us” will all be for whatever we define as the better. But in the long run, it hardly matters what we think. The citizens of the future will uncritically adapt to the culture we bequeath them, and find arguments like this one as unfathomable as we find the frettings of the Know-Nothings.

***

I went off on a bit of a tangent there – I didn’t set out intending to write about immigration, not exactly. What got me thinking about Scott Alexander’s Know-Nothing was this passage in Brave New World.

Early on we’re introduced to Helmholtz Watson, lecturer at the College of Emotional Engineering. Helmholtz is troubled by an inchoate sense that, despite the state of universal contentment society has achieved in the year 632 After Ford, something vital is missing. He tries to explain to a friend what he means:

He was silent, then, “You see,” he went on at last, “I’m pretty good at inventing phrases – you know, the sort of words that suddenly make you jump, almost as though you’d sat on a pin, they seem so new and exciting even though they’re about something hypnopaedically obvious. But that doesn’t seem enough. It’s not enough for the phrases to be good; what you make with them ought to be good too.”

“But your things are good, Helmholtz.”

“Oh, as far as they go.” Helmholtz shrugged his shoulders. “But they go such a little way. They aren’t important enough, somehow. I feel I could do something much more important. Yes, and more intense, more violent. But what? What is there more important to say?”

I shut the book and reflected on how in every generation, people complain that things are getting worse – morals are deteriorating, the scope of personal freedom is shrinking, tastes are coarsening, the best and highest works of our culture gather dust while the mob lavishes praise on ephemera. Optimists point to the fact that pessimists have been tolling the same doleful bell since at least Plato’s time as proof that the pessimists can be safely ignored: According to those old farts we’ve been driving off a cliff for two and a half millennia. Yet here we still are!

I share their optimism much of the time. Indeed, here we are! We’ve got it pretty good! Food is cheap, yoga pants are amazing for all sorts of reasons, and it appears euthanasia-on-demand will win the race against my accelerating decrepitude. Go toll your bell somewhere else, Gloomy Gus!

But reading Helmholtz’s report from the distant future, it occurred to me that perhaps the Gloomy Guses have been right all along. Every one of them.

In every generation things are lost. Some of those things are deliberately buried, because manners change, and people will no longer put up with blackface dance routines or teen sex comedies where the boys spy on the girls’ locker room. Often, in an excess of scrupulousness, good stuff gets buried with the bad. But most of the good stuff isn’t even deliberately buried, it just gets left behind and forgotten. And the people who’ve forgotten it don’t even know what they’re missing.

You might say it’s nothing to worry about. Our culture keeps generating new stuff to replace what’s lost, and if that new stuff isn’t as good as the old stuff, that’s fine, the culture will just adjust its definition of quality and future folks won’t know the difference.

Assuming, that is, that the conditions enabling us to generate new stuff will always prevail. But what if they don’t? What if historical progress actually has an end point?

Brave New World illustrates one way we could put a stop to history: we could actually bio-engineer creativity out of the human race. Helmholtz Watson, with his vague urges toward individual expression, is an aberration in the world of 632 A.F. – a genetic mistake of a kind society is working to eliminate. Another hundred years of tweaking the mix in the test tubes, and socially destabilizing brooders like Helmholtz might be done away with entirely.

I wish I could say confidently that we’ll never elect to bio-engineer our humanity away like that. But even if the human race remains inwardly human, external conditions might impede our creativity. Overcrowding. Technological dependency. The sheer bulk of our past achievements has already made it impossible to be a generalist in the manner of Newton or Goethe or Ben Franklin; if you want to add anything significant to the corpus of cultural knowledge, you now have to specialize. We might reach a point where the number of ideas you have to know already in order to conceive a new idea is so immense that no human brain can handle it; we’ll have no choice but to turn the process of ideation over to computers. Even demoralizing reflections like this one, the fear that all the good ideas have already been thought up, might increasingly lead to torpor and civilizational paralysis.

In the worst case, humanity might go the way of the famous mouse utopia experiment at NIMH – mouse decadence, then mouse apathy, then mouse barbarism, then total population collapse. But I suspect we’ll settle instead into something not far removed from Aldous Huxley’s prophetic satire – maintained by robots, pacified by porn and marijuana, stimulating the atrophied remnants of our thymos with virtual status-seeking – unlocking special achievements in video games and the like. I mean, we in the West aren’t too far from that already, except that the robots haven’t taken quite all the jobs yet so some of us still have to work. And you know what, it’s not that bad. We can’t regret what we don’t know we’ve lost.

When the Know-Nothing time-traveller arrives on our doorstep, we’ll listen with raised eyebrows to his crazy harangue. “The arts? Philosophy? The struggle for distinction? Geez, it all sounds awful. Why don’t you go for a walk, old man, take a look around. You’ll see how much better we have things now.”

M.

PS. I was re-reading Brave New World to celebrate the recent wrapping-up of my own novel on a similar theme. More about this soon…


The old, illogical morality: The Kindly Ones and Darkness at Noon.

Note: This is the third in a cache of old abandoned blog posts I recently recovered from a rarely-used laptop. The “project” I allude to below is the novel I’m currently wrapping up, about which more soon.

In preparation for a project I’m thinking of attempting, I’ve been doing some research on life behind the Iron Curtain. To this end I was recently reading Anne McElvoy’s The Saddled Cow: East Germany’s Life and Legacy, in which she interviews Wolfgang Leonhard, a “former comrade” of longtime East German ruler Erich Honecker. Leonhard recalls of the leader-to-be:

He had the main characteristic I would consider essential for success as a young functionary: absolutely average intelligence. In a communist party on the Stalinist model, you have to have a good memory and an ability to absorb reams of resolutions and turn them into directives, so you need a certain basic intelligence. You can’t be plain dumb, as was required under the Nazis, because the ideology is much more complicated. But you can’t be too intelligent, because people of above-average intellect have a tendency to challenge the arcana, to spot its flaws, which makes them disobedient.

Did the Nazis require their members to be “plain dumb”? To some degree we must defer to the old comrade’s experience. As a youth in the Third Reich, Leonhard must have met many Nazis, and maybe they were on the whole dumber than his Communist acquaintances – although one doubts his impartiality. Certainly Nazism and its Fascist sister-governments had their share of intelligent sympathizers, from Martin Heidegger to Robert Brasillach to Ezra Pound; and I suspect if those governments had remained on the scene longer, they would eventually have accumulated a body of Western intellectual fellow-travellers like those that forgave and justified all Communism’s “mistakes” and “excesses”. But it’s hard to say.

Leonhard’s comment brought to mind a scene in The Kindly Ones, Jonathan Littell’s disturbing novel of World War II. Our narrator and “hero”, the intelligent and cultured SS officer Dr. Maximilian Aue, as punishment for having displeased his superior, is transferred to Stalingrad just as the Germans are losing control of that city to the Soviet counterattack. There, amid the rubble and sickness and squalor, he interviews a captured enemy politruk – a Communist Party member assigned to a Soviet army unit to build morale and ensure obedience to the party line. Their conversation runs for several pages and makes a useful crib sheet on the differences and similarities between the two totalitarianisms. Here’s how the politruk sums it up:

“[O]ur ideologies have this basic thing in common, which is that they are both essentially deterministic; racial determinism for you, economic determinism for us, but determinism all the same. We both believe that man doesn’t freely choose his fate, but that it is imposed on him by nature or history. And we both draw the conclusion that objective enemies exist, that certain categories of human beings can and must legitimately be eliminated not for what they’ve done or even thought, but for what they are. In that, we differ only in the definition of the categories: for you, the Jews, the Gypsies, the Poles, and even, I believe, the mentally ill; for us, the Kulaks, the bourgeois, the Party deviationists. At bottom, it’s the same thing; we both reject the homo economicus of the capitalists, the egotistical, individualistic man trapped in his illusion of freedom, in favor of a homo faber: Not a self-made man but a made man, you might say in English, or a man yet to be made, since communist man must still be constructed, educated, just like your perfect National Socialist. And this man-to-be-made justifies the pitiless liquidation of everything that is uneducable, and thus justifies the NKVD and the Gestapo, gardeners of the social body, who tear out the weeds and force the good plants to follow their stakes.”

This politruk, like Aue, has been sent to the front after falling out of favour with his superiors. He bears a passing resemblance to Rubashov, the main character in Arthur Koestler’s Darkness at Noon, a high-ranking commissar and veteran of the Revolution who is imprisoned on trumped-up charges and tried as a “Party deviationist”. In his diary Rubashov writes:

We [Communists] have learnt history more thoroughly than the others. We differ from all others in our logical consistency. We know that virtue does not matter to history, and that crimes remain unpunished; but that every error has its consequences and venges itself unto the seventh generation. Therefore we concentrated all our efforts on preventing error and destroying the very seeds of it. Never in history has so much power over the future of humanity been concentrated in so few hands as in our case. Each wrong idea we follow is a crime committed against future generations. Therefore we have to punish wrong ideas as others punish crimes: with death.

History put me where I stood; I have exhausted the credit which she accorded me; if I was right I have nothing to repent of, if wrong, I will pay.

Following this logic, Rubashov convinces himself of the historical necessity of his own annihilation. He willingly confesses to the absurd charges against him and abases himself at his show trial.

Just as Darkness at Noon illustrates the thought processes by which an intelligent man can arrive at the conclusion that his own life must be sacrificed to the vaunted triumph of the Classless Society, The Kindly Ones shows how an intelligent man can convince himself of the necessity of exterminating whole ethnicities deemed inconvenient to the security of the state. At one point Dr. Aue accepts an invitation to dinner at Adolf Eichmann’s apartment and finds himself instructing his host on the finer points of their shared ideology – specifically, how it can be reconciled with Kant’s categorical imperative. (At his 1961 trial in Israel, Eichmann would arouse indignation by proclaiming, as Hannah Arendt recounts in Eichmann in Jerusalem,

that he had lived his whole life according to Kant’s moral precepts, and especially according to a Kantian definition of duty.

Arendt expresses surprise that Eichmann, questioned by a judge on this point, is able to supply “an approximately correct definition of the categorical imperative”.)

At his dinner party, Eichmann listens eagerly as his educated guest explains how Kant’s philosophy of individual will can be reconciled with the Führerprinzip, the principle that in the Third Reich “the Führer’s words have the force of law”:

“You have to live out your National Socialism by living your own will as if it were the Führer’s … Whoever only obeys orders like an automaton, without examining them critically to penetrate their inner necessity, does not work closer to the Führer; most of the time, he distances himself from him. … All law must rest on a foundation. Historically, this has always been a fiction or an abstraction – God, the King, or the People. Our great advance has been to base the legal concept of the Nation on something concrete and inalienable: the Volk, whose collective will is expressed by the Führer who represents it. When you say Frei sein ist Knecht sein [To be free is to be a vassal], you have to understand that the foremost vassal of all is precisely the Führer, since he is nothing but pure service. We are not serving the Führer as such, but as the representative of the Volk, we serve the Volk and must serve it as the Führer serves it, with total abnegation. That’s why, confronted with painful tasks, we have to bow down, master our feelings, and carry them out with firmness.”

It’s possible that the mental convolutions needed to overcome the evident contradictions of Communism and National Socialism make those ideologies more appealing to intelligent people; it is precisely their affront to common sense that makes them attractive to those, like Rubashov and Dr. Aue, who justly perceive themselves as uncommon. No particular genius is necessary to observe that mass murder is wrong. It takes a nimble mind to argue that the grand march of history dictates the necessity of submitting to this distasteful duty.

Rubashov, on the eve of his execution, begins to doubt the result to which his reasoning has led him:

For forty years he had lived strictly in accordance with the vows of his order, the Party. He had held to the rules of logical calculation. He had burnt the remains of the old, illogical morality from his consciousness with the acid of reason. … And where had it landed him? Premises of unimpeachable truth had led to a result which was completely absurd … Perhaps it was not suitable for a man to think every thought to its logical conclusion.

Perhaps not, but how are we to know when to abandon logic except by logically analyzing the problem? Some like to imagine there’s an invisible thread wound around our hearts that will, if we let it, guide us back to the light when logic leads us astray. Call this thread God, or conscience, or common humanity. But the history of the last century demonstrates that the thread, if it exists, is easy to sever, and that far from feeling lost without it, we gloat over our freedom.

M.

Over-noticing minorities.

Note: Instead of updating my blog, I’ve been busy writing my first novel, about which more soon. However, I recently rediscovered on a little-used laptop a cache of abandoned blog posts which I’ll be publishing over the next few days. Some, like this one, refer to news stories that are now a couple years out of date, but I hope you’ll still find them relevant. Here’s the second one…

I can’t remember what chain of links I followed, but the other day I wound up re-reading Ron Unz’s 2012 article The Myth of American Meritocracy in The American Conservative. Unz begins by setting out to prove that Ivy League and other elite universities’ admission policies supposedly designed to reflect America’s diversity have the perverse effect of discriminating against Asian applicants. But in the course of his very long (22,000 words) article he brings in numerous other examples of bias, corruption, and carpetbagging to illustrate his argument that current policies are “selecting future American elites which are not meritocratic nor diverse, neither being drawn from our most able students nor reasonably reflecting the general American population.”

Unz makes the case for replacing the current opaque and arbitrary admissions system with one where the majority of places are assigned by lottery. He calls his scheme “the Outer Ring and the Inner Ring”: the Outer Ring of (say) 80% of each year’s incoming class would be selected randomly from the huge pool of applicants who met basic academic standards, while the smaller Inner Ring would be admitted strictly on academic merit. Unz thinks this system would be fairer, lead to more genuine diversity, and as a bonus, by distributing talented students more evenly across the nation’s campuses, would deal a blow to “the sort of arrogance found among too many of today’s elite college graduates”.
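The mechanics of the scheme are simple enough to sketch in a few lines of Python. To be clear, this is my own toy rendering of the proposal as I understand it; the applicant numbers and the minimum-score cutoff are invented placeholders, not figures from Unz’s article:

```python
import random

def admit(applicants, class_size, inner_fraction=0.2, min_score=1200):
    """applicants: list of (name, score) pairs, on an SAT-like scale.

    Inner Ring: the top scorers, admitted strictly on academic merit.
    Outer Ring: a random draw from everyone else who clears a basic bar.
    """
    inner_size = int(class_size * inner_fraction)
    ranked = sorted(applicants, key=lambda a: a[1], reverse=True)
    inner = ranked[:inner_size]
    eligible = [a for a in ranked[inner_size:] if a[1] >= min_score]
    outer = random.sample(eligible, class_size - inner_size)
    return inner + outer

# Toy example: 30,000 applicants competing for 2,000 places.
pool = [("applicant%d" % i, random.randint(900, 1600)) for i in range(30_000)]
entering_class = admit(pool, class_size=2000)
```

Everything contentious in the real debate hides inside min_score – the “basic academic standards” line – which the sketch simply takes as given.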

I have no comment on the article, except to say that it seems reasonable and it holds up on a second reading. Incidentally, it’s essential background information for understanding a recent news item from California, where lawmakers in spring 2014 declined to revisit the state constitutional amendment (passed by ballot initiative in 1996) that forbids consideration of race in admissions to state universities. The Democratic majority, heavily dependent on minority voters, would like to bring back racial preferences to increase enrolment of blacks and Latinos, but Asians, another significant Democratic constituency, are worried any attempt to tweak the campus ethnic mix would come chiefly at their children’s expense. (At present Asians are hugely overrepresented in California state universities, relative to their share of the college-age population, while blacks and Hispanics are significantly underrepresented.)

[Chart: University of California Freshman Enrollees By Race, 2010]

So thanks to the lobbying of several Asian state senators, California’s current race-blind admissions policy will continue unchanged, at least for now.

Anyway, though all that is interesting, it’s not what I wanted to write about. I was struck by this sentence, buried down around the 18,000-word mark:

So perhaps many college administrators may have little idea about which ethnic groups are already enrolled above parity and which are below, instead taking their marching orders from an amorphous academic narrative which valorizes “racial diversity.”

To support this conjecture, Unz points to a 2001 Gallup poll that asked Americans to estimate what percentage of the country’s population was black and what percentage Hispanic. The correct answer at that time was 12.3% for blacks and 12.5% for Hispanics. (Since then the black percentage has stayed about the same, while Hispanics have crept up to around 17%.) How accurate were people’s guesses?

[S]lightly less than one in 10 Americans can accurately identify that the population of either blacks or Hispanics in this country falls between 10% and 14%. The typical American estimates the percentages of blacks and Hispanics in this country to be more than twice as high as they actually are.

On average, Americans say that 33% of the U.S. population is black. In fact, a majority of Americans (56%) estimate that the percentage of blacks in this country stands at 30% or higher. As many as 17% of Americans say the percentage of blacks is 50% or greater. Only 7% accurately state that the percentage of blacks falls between 10% and 14% of the entire population.

Americans’ impressions about the percentage of Hispanics in this country are somewhat more accurate … Americans, on average, say that 29% of the U.S. population is Hispanic.

This confusion wasn’t limited to the poor and ill-educated. Even among holders of postgraduate degrees, the mean estimate for the black population was 25%. A couple years ago Razib Khan looked at another survey (from the year 2000) with similar results. This survey included questions about Asians and Jews, whose numbers college graduates exaggerated by a factor of four.

To repeat, these polls were taken over a decade ago, but I can’t see any reason to believe people are better informed today. I was curious to see whether we were equally confused about the number of gays and lesbians in our midst, and it turns out, yup, Gallup asked that question too, back in 2011. The headline says it all: U.S. Adults Estimate That 25% of Americans Are Gay or Lesbian. Twenty-five percent! Of course defining who is or isn’t gay is even harder than drawing borders around the different races, but as Gallup points out, most statisticians put the number of gays, lesbians, and bisexuals together at under 5% of the population. Even in Alfred Kinsey’s famously loosey-goosey data set, gathered by more or less seeking out the kinkiest people he could find, then asking them to send round all their kinkiest friends, only 10% of men were willing to admit they’d had a homosexual experience.

At first blush, you might expect this extraordinary evidence of our statistical ignorance to explain a lot about the way people think about race and sexual orientation. But on further reflection, it’s not obvious that it tells us much of anything, besides that people are terrible at counting.

To illustrate: All three of the surveys broke out the results by respondents’ political persuasion, showing that liberals and conservatives were about equally likely to overestimate the numbers of minorities. Conservatives tended to be a shade more realistic in their guesses, but there’s no way to tell if that’s because conservatives are better acquainted with demographics or just that they tend to live in districts where fewer minorities are around, mitigating their tendency to over-notice. Either way, it seems that liberals and conservatives, working from the same faulty assumptions, arrive at opposite policy conclusions – just as better-informed thinkers working from actual demographic data reach opposite conclusions about contentious issues around race and sexual identity.

For instance, these surveys explain why liberals find it so galling that straight white men continue to clog up Congress, corporate boardrooms, university faculties, news anchor desks, and movie screens. Take this article in the Hollywood Reporter a couple years back, which complains that “black actors have been losing ground. In the early 2000s, blacks played 15% of roles in film and TV. Today, it has fallen to 13%.” If you start from the assumption that a third or more of Americans are black, well, then yes, this modest decline does seem outrageous. If you recognize that blacks were in fact slightly overrepresented before the decline, the outrage is harder to sustain.

On the other hand, based on the same survey results I suppose it’s fair to condemn conservatives – most of whom, remember, also believe the inflated minority numbers – as insufficiently concerned over minority underrepresentation.

Liberals and conservatives alike (though in different contexts) invoke the name of Science, or Data, or The Facts, and imagine that their ideological opponents would come around if they were only exposed to them. But I doubt it would make much difference. People are quite adept at finding in the facts confirmation for whatever they already believe.

M.