eyes black and white

The Criminal Stupidity of Intelligent People

What always fascinates me when I meet a group of very intelligent people is the very elaborate bullshit that they believe in. The naive theory of intelligence I first posited as a kid was that intelligence is a tool to avoid false beliefs and find the truth. Surrounded by mediocre minds who held obviously absurd beliefs, not only without the ability to coherently argue why they held them, but without the ability even to understand basic arguments about them, I believed as a child that the vast amount of superstition and false belief in the world was due to people being stupid and following the authority of insufficiently intelligent teachers and leaders. More intelligent people, and people following more intelligent authorities, would thus automatically hold better beliefs and avoid disproven superstitions. However, as a grown-up, I got the opportunity to actually meet and mingle with a whole lot of intelligent people, including many whom I readily admit are vastly more intelligent than I am. And then I had to find that my naive theory of intelligence didn't hold water: intelligent people were just as prone as less intelligent people to believing in obviously absurd superstitions. Only their superstitions would be much more complex, elaborate, rich, and far-reaching than an inferior mind's superstitions.

For instance, I remember a ride with an extremely intelligent and interesting man (RIP Bob Desmarets); he was describing his current pursuit, which struck me as a brilliant mathematical mind's version of mysticism: the difference was that instead of marveling at some trivial picture of an incarnate god, as some lesser minds might have done, he was seeking some Ultimate Answer to the Universe in the branching structures of ever more complex algebras of numbers: real numbers, complex numbers, quaternions, octonions, and beyond, in ever higher dimensions (notably in relation to superstring theories). I have no doubt that there is something deep, probably enlightening, and even useful in such theories, and I readily disqualify myself from judging the technical merits of the contributions my friend made to the topic; no doubt they were brilliant in one way or another. Yet the way he was talking about this topic immediately raised the "crackpot" flag; he was looking there for much more than could possibly be found, and anyone (like me) capable of acknowledging being too stupid to fathom the Full Glory of these number structures yet able to find some meaning in life could have told him that no, this topic doesn't hold the key to The Ultimate Source of All Meaning in Life. Bob's intellectual quest, as exaggeratedly exalted as it might have been, and as interesting as it was to his own exceptional mind, was on the grand scale of things but a modestly useful research avenue at best, and an inoffensive pastime at worst. Perhaps Bob could conceivably have used his vast intellect towards pursuits more useful to you and me; but we didn't own his mind, and we have no claim to lay on the wonders he could have created but failed to, by putting his mind to one quest rather than another. First, Do No Harm. Bob didn't harm anyone, and his ideas certainly contained no hint of any harm to be done to anyone.

Unhappily, that is not always the case with every intelligent man's fantasies. Consider a discussion I had recently, the one that prompted this article. Last week, I joined a dinner-discussion with a LessWrong meetup group: radical believers in rationality and its power to improve life in general and one's own life in particular. As you can imagine, the attendance was largely, though not exclusively, composed of male computer geeks. But then again, any club that accepts me as a member will probably be biased that way: birds of a feather flock together. No doubt there are plenty of meetup groups with the opposite bias, gathering desperately non-geeky females almost to the exclusion of males. Anyway, the theme of the dinner was "optimal philanthropy", or how to give time and money to charities in a way that maximizes the positive impact of your giving. So far, so good.

But then, I found myself in a most disturbing private side conversation with the organizer, Jeff Kaufman (a colleague, I later found out), someone I strongly suspect of being in many ways saner and more intelligent than I am. While discussing utilitarian ways of evaluating charitable action, he at some point mentioned a quite intelligent acquaintance of his who believed that morality was about minimizing the suffering of living beings; from there, that acquaintance logically concluded that wiping out all life on Earth with sufficient nuclear bombs (or with grey goo) in a surprise simultaneous attack would be the best possible way to optimize the world, though one would have to make triple sure to involve enough destructive power that not one single strand of life should survive, or else the suffering would go on and the destruction would have been just gratuitous suffering. We all seemed to agree that this was an absurd and criminal idea, and that we should be glad the guy, brilliant as he may be, doesn't remotely have the ability to implement his crazy scheme; we shuddered, though, at the idea of a future super-human AI having this ability and being convinced of such theories.

That was not the disturbing part, though. What tipped me off was when Jeff, taking the "opposite" stance of "happiness maximization" to the discussed acquaintance's "suffering minimization", seriously defended the concept of wireheading as a way that happiness may be maximized in the future: putting humans into vats where the pleasure centers of their brains would be constantly stimulated, possibly by force. Or perhaps, instead of humans, using rats, or ants, or some brain-cell cultures, or nano-electronic simulations of such electro-chemical stimulation; in the latter cases, biological humans, being less efficient forms of happiness substrate, would be done away with, or at least not renewed, as embodiments of the Holy Happiness to be maximized. He even wrote at least two blog posts on this theme: Hedonic vs Preference Utilitarianism in the Context of Wireheading, and Value of a Computational Process. In the former, he admits to some doubts, but concludes that the ways a value system grounded on happiness differ from my intuitions are problems with my intuitions.

I expect that most people would, and rightfully so, find Jeff's ideas, as well as his acquaintance's, ridiculous and absurd on their face; they would judge any attempt to use force to implement them as criminal, and they would consider their fantasized implementation to be the worst of possible mass murders. Of course, I also expect that most people would be incapable of arguing their case rationally against Jeff, who is much more intelligent, educated, and knowledgeable in these issues than they are. And yet, though most of them would have to admit their lack of understanding and their absence of a rational response to his arguments, they'd be completely right to reject his conclusion and to refuse to hear his arguments, for he is indeed the sorely mistaken one, despite his vast intellectual advantages.

I willfully defer any detailed rational refutation of Jeff's idea to some future article (can you, without reading mine, write a valuable one?). In this post, I rather want to address the meta-point of how to respond to the seemingly crazy ideas of our intellectual superiors. First, I will invoke the "conservative" principle (as I'll call it), well defended by Hayek (who was not a conservative): we must often reject the well-argued ideas of intelligent people, sometimes more intelligent than we are, sometimes without giving them a detailed hearing, and instead stand by our intuitions, traditions, and secular rules, which are the stable fruit of millennia of evolution. We should not lightly reject those rules, certainly not without a clear, testable understanding of why they were valid where they are known to have worked, and why they would cease to be valid in another context (see Chesterton's Fence). Second, we should not hesitate to argue by proxy in an eristic contest: if we are to bow to the superior intellect of our betters, it should not be without having pitted said presumed superior intellects against each other in a fair debate, to find out whether there is indeed a better whose superior arguments can convince the others or reveal their error. Last but not least, beyond mere conservatism or debate, mine is the Libertarian point: there is Universal Law, which everyone must respect, whereby peace between humans is possible inasmuch, and only inasmuch, as they don't initiate violence against other persons and their property. And as I have argued in a previous essay (hardscrapple), this generalizes to maintaining peace between sentient beings of all levels of intelligence, including any future AI that Jeff may be prone to consider. Whatever one's prevailing or dissenting opinions, the initiation of force must never be allowed to go unpunished as a means to further any ends.
Rather than doubt his intuitions, Jeff should have been tipped off that his theory was wrong, or at least stretched far beyond its context of validity, by the very fact that it advocates or condones a massive violation of this Universal Law. Criminal urges, mass-criminal at that, give off a strong stench that should alert anyone that some ideas have gone astray, even when it might not be immediately obvious where exactly they started parting from the path of sanity.

Now, you might ask: it is all well and good to poke fun at the crazy ideas that some otherwise intelligent people may hold; it may even allow one to wallow in a somewhat justified sense of intellectual superiority over people who otherwise are actually and objectively one's intellectual superiors. But is there a deeper point? Does it matter what crazy ideas intellectuals hold, whether inoffensive or criminal? Sadly, it does. As John McCarthy put it, "Soccer riots kill at most tens. Intellectuals' ideological riots sometimes kill millions." Jeff's particular crazy idea may be mostly harmless: the criminal raptures of the overintelligent nerd, so elaborate as to be unfathomable to 99.9% of the population, are unlikely ever to spread to enough of the power elite to be implemented. That is, unless by some exceptional circumstance there is a short and brutal transition to power by some overfriendly AI programmed to follow such an idea. On the other hand, the criminal raptures of a majority of the more mediocre intellectual elite, when they further possess simple variants that can intoxicate the ignorant and stupid masses, are not just theoretically able to lead to mass murder, but have historically been the source of all large-scale mass murders so far; and these mass murders can be counted in the hundreds of millions, over the 20th century alone, for Socialism alone. Nationalism, Islamism, and Social Democracy (the attenuated strand of socialism that now reigns in Western "democracies") count their victims in mere millions. And every time, the most well-meaning of intellectuals build and spread the ideologies of these mass murders. A little initial conceptual mistake, properly amplified, can do that.

And so I am reminded of the meetings of some communist cells that I attended out of curiosity when I was in high school. Indeed, Trotskyites quite openly recruit in "good" French high schools. It was amazing the kind of nonsensical crap that these obviously above-average adolescents could repeat. "The morale of the workers is low." Whoa. Or "The petite bourgeoisie is plotting this or that." Apparently, crudely carved-out social classes spanning millions of individuals act as one man, either afflicted with depression or making Machiavellian plans. Not that any of them knew much of either salaried workers or entrepreneurs except through one-sided socialist literature. If you think that the nonsense of the intellectual elite is inoffensive, consider what happens when some of them actually act on those nonsensical beliefs: you get terrorists who kill tens of people; when they lead ignorant masses, they end up killing millions of people in extermination camps or plain massacres. And when they take control of entire universities, and train generations of scholars, who teach generations of bureaucrats, politicians, and journalists, then you suddenly find that all politicians agree on slowly implementing the same totalitarian agenda, one way or another.

If you think that control of universities by left-wing ideologues is just a French thing, consider how, for instance, America just elected a president whose mentor and ghostwriter was the chief of a terrorist group made of Ivy League-educated intellectuals whose overriding concern about the country they claimed to rule was how to slaughter ten percent of its population in concentration camps. And then consider that the policies of this president's "right-wing" opponent are indistinguishable from the policies of said president. The violent revolution has given way to the slow replacement of the elite, towards the same totalitarian ideals, coming to you slowly but relentlessly rather than through a single mass criminal event. Welcome to a world where the crazy ideas of intelligent people are imposed by force, cunning, and superior organization upon a mass of less intelligent yet less crazy people.

Ideas have consequences. That's why everyone Needs Philosophy.

Comments

And then I had to find that my naive theory of intelligence didn't hold water: intelligent people were just as prone as less intelligent people to believing in obviously absurd superstitions. Only their superstitions would be much more complex, elaborate, rich, and far-reaching than an inferior mind's superstitions.

Intelligent people more often (though not always) prefer to hold beliefs supported by argument. But they're also more capable of constructing arguments to support their absurd beliefs, if they're so inclined.

I do think that you're overstating Jeff's arguments here, though. In particular, I think that your statement that Jeff "concludes that the ways a value system grounded on happiness differ from my intuitions are problems with my intuitions" is a misleading quote. The full sentence is, "But when happiness comes so close to fitting I have to consider that it may be right and the ways a value system grounded on happiness differ from my intuitions are problems with my intuitions," and that "I have to consider that it may be right" is significant. He's saying that he "has to consider" that the difference may be a problem with his intuitions; you say he "concludes" that it is.

As a side-note, Yudkowsky's arguments regarding Coherent Extrapolated Volition seem to be trying to save preference utilitarianism from exactly the sort of dead-ends Jeff's arguments indicate.
March 2014
