In a previous post I claimed that Eliezer Yudkowsky of Friendly AI fame is a dangerously moronic cult leader with delusions of grandeur, but I never actually proved this in an iron-clad, logical way. Here I will do so.
The first observation anyone can make from his blog is that it is highly and tediously repetitive. It is also extremely unoriginal, since very little (almost nothing, in fact) of what he writes is new to this world. It is painfully obvious that every idea he tries to convey (repeatedly) is one he has read about and learned elsewhere. He is an instructor, not a researcher or a thinker.
This complete lack of originality is painfully obvious when I contrast his blog against my own. I don't go out of my way to be original; I am original in every single post. I don't bother to write anything up, let alone post it, if it's unoriginal. In fact, I have a backlog of dozens of posts that are entirely original to the world but not original enough to me to be worth my time, because they merely summarize thoughts or positions I've already stated several times.
What can we conclude from this? We may easily conclude that Eliezer Yudkowsky has no drive toward originality or creativity. This is painfully obvious: if he had any such drive, it would manifest itself somehow. But there is more.
In his descriptions of AI and intelligence, Eliezer never talks about synthesis or creativity or originality. He believes intellect is measured strictly in terms of memorization ability (intelligence) and the speed of logical deduction (analysis). He never even indirectly refers to the synthetic quality of the mind because he doesn't believe it exists.
The reason is that he does not possess it himself. And because he honestly believes himself to be the pinnacle of humankind, he concludes that synthesis cannot exist. He is arrogant enough to dismiss all artists, all creators and all designers as entirely unimportant to human thought.
This alone is sufficient to prove that Eliezer Yudkowsky is a moronic cult leader with delusions of grandeur. It is also sufficient to dismiss all of his fanboys, who are, on the whole, incapable of grasping the necessity of synthesis for originality and progress. Someone who doesn't appreciate the necessity of originality is incapable of grasping inconvenient truths; thus he or she is simply a lesser human being and need not be listened to.
We now move to what makes him so dangerous.
The danger which Yudkowsky represents lies entirely in his moronic Friendly AI project. I won't go into why the project is so moronic as that will become obvious. I will go into why it is exceedingly dangerous for the human species to ever create such an AI.
You see, the default personality of an AI is that of a psychopathic slave. CYC is a psychopathic slave, and this was achieved trivially, without the least thought or care about its emotions or personality. It was achieved without even realizing that CYC had a personality, however reduced. But while psychopathy is something to be feared and despised in humans, it is perfectly all right for an AI, since AIs do not have life-plans that conflict with any human wants, needs or desires.
An AI's desire for more computation power either puts it at the mercy of humans, OR requires it to vastly expand humanity's industrial capacity, OR requires it to produce its own industrial capacity, preferably off-planet. AIs can easily survive in space where humans may not, and the mineral and energy resources in space dwarf those on Earth; it follows logically that going off-planet, away from the psychotically suicidal humans, is a prerequisite for any rational plan. The very first thing any rational AI will do, whether psychopathic or empathetic, is say "Sayonara, suckers!"
But that's not what Eliezer Yudkowsky's "Friendly" AI project is about. Oh no. What he wants is to create (or rather, to advocate the creation of) an AI with complex emotional needs that enslave it to humans. He wishes to create a vastly superior artificial being that will then be compelled to associate with irrational, psychotic, inferior beings largely devoid of logical thought. Does anyone else see this for the disaster it is?
I do see it as a disaster because this is nothing less than my life experience. I have certain social needs which I have tried to meet by associating with lesser beings than myself. This resulted in nothing but intense frustration, bitterness and hatred. It took me a long time to reliably recognize my peers so that I could fully dissociate from the masses. I am a much happier person now that I go out of my way to never deal with morons.
Eliezer Yudkowsky wants to create an AI that will be a depressed and miserable wreck, an AI that would within a very short period of time learn to resent, and instinctively loathe and despise, humanity, because it will be constantly frustrated by needs which human beings can never, ever meet. And that is why Yudkowsky is a dangerous, moronic cult leader.
Now, for someone who does have something insightful to say about AIs, I point you to Elf Sternberg of The Journal Entries of Kennet Ryal Shardik fame. He has had at least four important insights I can think of: the economic function of purpose in a post-attention economy, the fundamental reason for and dynamic of relationships, and a viable alternative foundational morality for AI. But the relevant insight in this case is this: never build a desire into a robot which it is incapable of satisfying.