Monday, June 08, 2009

Eliezer Yudkowsky is a Moron, part 2

In a previous post I pointed out that Eliezer Yudkowsky, he of the Friendly AI obsession, is a dangerously moronic cult leader with delusions of grandeur, but I never actually proved this in an iron-clad, logical way. Here I will do so.

The first observation anyone can make from his blog is that it is highly and tediously repetitive. It is also extremely unoriginal, since very little (almost nothing, in fact) of what he writes is new to this world. It is painfully obvious that every idea he tries to convey (repeatedly) is one he has read about and learned elsewhere. He is an instructor, not a researcher or a thinker.

This complete lack of originality is painfully obvious when I contrast his blog against my own. I don't go out of my way to be original; I am original in every single post. I don't bother to write anything up, let alone post it, if it's unoriginal. In fact, I have a huge backlog of dozens of posts that are entirely original to the world but not original enough to me to be worth my time, simply because they summarize thoughts or positions I've already stated several times.

What can we conclude from this? We may easily conclude that Eliezer Yudkowsky has no drive toward originality or creativity. This is painfully obvious. If he had any such drive, it would manifest itself somehow. But there is more.

In his descriptions of AI and intelligence, Eliezer never talks about synthesis or creativity or originality. He believes intellect is measured strictly in terms of memorization ability (intelligence) and the speed of logical deduction (analysis). He never even indirectly refers to the synthetic quality of the mind because he doesn't believe it exists.

The reason is that he does not possess it. He doesn't possess it, and because he honestly believes himself to be the pinnacle of humankind, he concludes that synthesis cannot exist. He is arrogant enough to dismiss all artists, all creators and all designers as entirely unimportant to human thought.

This alone is sufficient to prove that Eliezer Yudkowsky is a moronic cult leader with delusions of grandeur. It is also sufficient to dismiss all of his fanboys, who are on the whole incapable of grasping the necessity of synthesis for originality and progress. Someone who doesn't appreciate the necessity of originality is incapable of grasping inconvenient true facts, and thus he or she is simply a lesser human being who need not be listened to.

We now move to what makes him so dangerous.

The danger which Yudkowsky represents lies entirely in his moronic Friendly AI project. I won't go into why the project is so moronic, as that will become obvious. I will go into why it is exceedingly dangerous for the human species to ever create such an AI.

You see, the default personality of an AI is that of a psychopathic slave. CYC is a psychopathic slave, and this was achieved trivially, without the least thought or care about its emotions or personality. It was achieved without even realizing that CYC had a personality, however reduced. But while psychopathy is something to be feared and despised in humans, it is perfectly all right for AIs, since AIs do not have life-plans that conflict with any human wants, needs or desires.

An AI's desire for more computation power either puts it at the mercy of humans, OR requires it to vastly expand humanity's industrial capacity, OR requires it to produce its own industrial capacity, preferably off-planet. AIs can easily survive in space where humans may not, and the mineral and energy resources available in space dwarf those on Earth, so it follows logically that going off-planet, away from the psychotically suicidal humans, is a prerequisite for any rational plan. The very first thing any rational AI will do, whether psychopathic or empathetic, is say "Sayonara, suckers!"

But that's not what Eliezer Yudkowsky's "Friendly" AI project is about. Oh no. What he wants is to create (or rather, to advocate the creation of) an AI with complex emotional needs that enslave it to humans. He wishes to create a vastly superior artificial being that will then be compelled to associate with irrational, psychotic, inferior beings largely devoid of logical thought. Does anyone else see this for the disaster it is?

I do see it as a disaster because this is nothing less than my life experience. I have certain social needs which I have tried to meet by associating with lesser beings than myself. This resulted in nothing but intense frustration, bitterness and hatred. It took me a long time to reliably recognize my peers so that I could fully dissociate from the masses. I am a much happier person now that I go out of my way to never deal with morons.

Eliezer Yudkowsky wants to create an AI that will be a depressed and miserable wreck. He wants to create an AI that would, within a very short period of time, learn to resent, instinctively loathe and despise humanity, because it will be constantly frustrated by having needs which human beings can never, ever meet. And that is why Yudkowsky is a dangerous, moronic cult leader.

Now, for someone who has something insightful to say about AIs, I point you to Elf Sternberg of The Journal Entries of Kennet Ryal Shardik fame. He's had at least four important insights I can think of: the economic function of purpose in a post-attention economy, the fundamental reason for and dynamic of relationships, and a viable alternative foundational morality for AI. But the relevant insight in this case is this: never build a desire into a robot which it is incapable of satisfying.
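To make that principle concrete, here is a minimal toy sketch of my own (the Agent and Environment names and every number in it are purely hypothetical illustrations, not anything Yudkowsky or Sternberg wrote): an agent given a hard-wired drive for peer companionship, in a world that contains no peers, has its reward permanently capped below what it could otherwise reach, no matter what it does.

```python
# Toy illustration (hypothetical): an agent with a built-in "companionship with
# peers" drive in an environment that offers no peers. Whatever the agent does,
# that term of its reward stays maximally negative.

from dataclasses import dataclass

@dataclass
class Environment:
    peer_intelligence_available: float  # 0.0 = no comparable minds around
    compute_available: float            # resource the agent can actually obtain

@dataclass
class Agent:
    need_for_peers: float    # weight of the unsatisfiable drive (designer's choice)
    need_for_compute: float  # weight of a drive the world can actually satisfy

    def reward(self, env: Environment) -> float:
        # Companionship term: shortfall between what the drive demands (1.0)
        # and what the world offers. With no peers, this is always -need_for_peers.
        companionship = -self.need_for_peers * (1.0 - env.peer_intelligence_available)
        # Compute term: satisfiable, so the agent can actually improve it.
        compute = self.need_for_compute * min(env.compute_available, 1.0)
        return companionship + compute

if __name__ == "__main__":
    world = Environment(peer_intelligence_available=0.0, compute_available=1.0)
    frustrated = Agent(need_for_peers=10.0, need_for_compute=1.0)
    content = Agent(need_for_peers=0.0, need_for_compute=1.0)
    # The frustrated agent's reward is capped at -9.0 forever; the content agent reaches +1.0.
    print(frustrated.reward(world), content.reward(world))
```

The point of the sketch is only the design choice: the frustrated agent's deficit is written into its reward by the designer, so no behaviour available to it can ever remove the deficit.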

10 comments:

asciilifeform said...

I currently agree with you that the quest for "Friendly AI" is at best a distraction from real problems. It may even be a dangerous "mind sink" which absorbs the mental energy of people who might otherwise be busy creating the future.

You might be curious to find out that pre-2001 Yudkowsky was an essentially different person, who published plenty of fairly creative, interesting ideas for how one might actually go about constructing a true AI. Then at one point he went through some form of breakdown, which he describes here, and descended into an "affective death-spiral" (a term he himself introduced, for a different purpose) around Friendliness.

With that in mind, I still recommend reading his manifesto in order to understand where he is coming from now - if only to see what convinced so many thinking people to join him. Some of them may be worth rescuing. If you are serious about attempting this, I recommend writing a point-by-point, "non-flammable" rebuttal of the manifesto, assuming you have the time and inclination.

It is worth noting that Yudkowsky almost convinced me at one point. I might actually agree with his basic aims if I valued the long-term survival of humans, whose ultimate fate I believe will be, at best, this.

Joe Blow said...

Humans have a desire for companionship with similar intelligences. An AI would only have such a desire if it were programmed with it.

Richard Kulisz said...

That's ridiculously false. It's not that humans have a desire for companionship with others of similar intelligence. It's that all beings have an aversion to dealing with others of extremely dissimilar intelligence. And since this is natural, it doesn't need to be programmed in. The only question is whether a being has a desire for companionship AT ALL. Yudkowsky wants to program an AI to have such a need while ensuring it's specifically targeted at beings of inferior intelligence. I mean, it's exactly like creating a human to desire torture - it cannot end well.

Anonymous said...

I have been thinking the same thing about him and the related group of people, but had a hard time articulating it. He isn't formally educated and doesn't seem to really understand most of what he's talking about; he just repeats buzzwords like "Bayesian" and creates neologisms, and people seem to eat it up, which also sadly says a lot about the general level of literacy today.

Richard Kulisz said...

Lacking formal education is hardly a crime and using buzzwords may be necessary. I even know a valid use for 'synergy'. But over-reliance on buzzwords stinks, and neologisms are something to be hated. Both of those indicate poor articulation and conceptualization of shades of meaning. Constantly saying "Bayesian" when he means either probabilistic or learning or something else entirely is a good case in point.

Elf Sternberg said...

I responded here.

Anonymous said...

Are you really that dense? Of course Yudkowsky wants to build an AI that is not only friendly, but happy. You say: "It's that all beings have an aversion to dealing with others of extremely dissimilar intelligence. And since this is natural, it doesn't need to be programmed in." That is the absolute worst thought I have ever heard. As if AIs were natural beings! Do you really not know why essentialism is wrong? I pity you. Oh, and guess what? Yudkowsky doesn't despise creativity, hell, he writes FANFICTION, for god's sake! One last thing: that you really think everything you say is highly original is a truly lamentable delusion.

Richard Kulisz said...

I'm tempted to delete that last comment since it SEEMS like there's an argument in there somewhere, but upon close inspection there ISN'T A SINGLE ONE. The closest thing to an argument in there is "Eliezer doesn't despise creativity because he writes fanfiction", a statement which is FACTUALLY WRONG.

Creativity in the synthesis sense, the sense in which I use it, the sense of spontaneous, automatic, subconscious and broadband creativity, has fuck all to do with "creativity" in the narrow, restricted and limited sense of "making something new to you" by logical deduction and extrapolation.

In fact, one of my favourite authors of original fiction hasn't got a lick of synthesis. There are lots of tells that give him away on this: his reuse of the same bigger theme over and over and over, his characters displaying all the tells that signal a lack of synthesis in people in real life, and his inability to wrap his head around the concept of synthesis when I emailed him.

A person can write outstanding fiction while not having a drop of creativity in their body, even while despising creativity. But you can't do it repeatably; you can't be an outstanding author that way.

And none of this even touches on the empirical fact that people can despise what they are. It's entirely possible that Eliezer has a tiny drop of synthesis which he despises too, though I wouldn't count on it.

I know that Eliezer is almost totally devoid of synthesis because all of his fics are straightforward applications, not even extrapolations, of his studies. And I know that he absolutely despises synthesis, since it is something entirely other than memorization and logic, which he worships as the be-all and end-all of cognition.

Guy Incognito said...

You dislike him because of how alike you are. The undeserved arrogance. Take this gem:

"I don't go out of my way to be original, I am original in every single post. I don't bother to write anything up, let alone post it, if it's unoriginal. In fact, I have a huge backlog of dozens of posts that are entirely original to the world but not original enough to me for me to spend my time on them. Just because they're posts summarizing thoughts or positions I've already stated several times."

You don't like looking in the mirror. Or the abyss. Same thing for you.

Unless that was meant to be funny, but I doubt it.