In a plague situation, certain people have their movements drastically curtailed. They are put under house arrest without ever having committed, or even been charged with, a crime. Other people are prevented from associating with them or even touching them, again at the risk of arbitrary detention. Their belongings are often seized, confiscated and/or destroyed.
Does that sound like a totalitarian dictatorship to you? It should, because that's exactly what it is. Totalitarian dictatorship is exactly what's required in order to defeat a plague. I know that idiots of the modern age put a quasi-religious faith in pharmaceuticals and medical procedures. But that's all a bunch of crap that doesn't work, as the ever-increasing rates of multiply-resistant strep show. What's required, what actually works, is good old quarantine and ultra-hygiene.
So are there any morally legitimate uses for totalitarian dictatorship? Are there any situations where totalitarian dictatorship is morally required? You bet your arse there are. Plague control! Totalitarian dictatorship is not some bugbear of evil. It's a form of political organization whose legitimate sphere of application is very limited, that's all. In fact, the equation totalitarianism == evil is the kind of absolutist, binary, mindless "thinking" which really ought to repel and disgust every thinking person.
I'm not even going to address the notion that plagues should go unchecked if checking them requires totalitarian dictatorship. That is utterly fucking stupid and anyone who buys into it is automatically a worthless excuse for a person. No, I'm not going to waste my time on that because there's a much more fun topic: AIDS.
You see, if plague control is a legitimate use case for totalitarian dictatorship, then the HIV / AIDS plague is one that ought to have been checked by a good dose of Stalinism. And it's not like it would have been that difficult. Just tattoo a little HIV+ on the inner thigh of every person who tests HIV positive two or three times in a row. Done aggressively enough, this would have stopped the HIV plague dead within a fortnight.
Cheap and effective! But nooo, it's far "better" for people to be "free" to die long, lingering deaths and for pharmaceutical companies to research deadly medicines for two-plus decades before making the slightest dent in the situation. Yeah man (in a Braveheart voice): freeeeeeedom. Pardon me while I vomit.
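As an aside, the "two or three times in a row" requirement isn't arbitrary; it's plain base-rate arithmetic. Here's a minimal sketch, using made-up sensitivity, specificity and prevalence figures (not real HIV test characteristics), of how the probability of actual infection climbs with each consecutive positive result:

```python
# Toy Bayesian update for repeated test results. All figures below are
# illustrative assumptions, not real HIV test characteristics.
def posterior_after_positives(prevalence, sensitivity, specificity, n):
    """P(infected | n consecutive positive tests), assuming independent tests."""
    p_pos_given_infected = sensitivity ** n
    p_pos_given_healthy = (1 - specificity) ** n
    numerator = prevalence * p_pos_given_infected
    denominator = numerator + (1 - prevalence) * p_pos_given_healthy
    return numerator / denominator

for n in (1, 2, 3):
    ppv = posterior_after_positives(0.003, 0.99, 0.98, n)
    print(f"{n} positive test(s): P(infected) = {ppv:.4f}")
```

Under these toy numbers a single positive test leaves you at only about 13% confidence, while three in a row puts you above 99.7%, which is the whole point of retesting before doing anything drastic.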
The harsh truth which some ideological numbskulls really need pounded into their heads is this: Security, Prosperity and Family are separate from freedom, and they matter more to happiness than freedom does. That's just one of those facts which I, as an anarcho-communist, learned from social conservatives.
Thursday, June 25, 2009
Monday, June 08, 2009
Eliezer Yudkowsky is a Moron, part 2
In a previous post I pointed out that Eliezer Yudkowsky, of Friendly AI obsession fame, is a dangerously moronic cult leader with delusions of grandeur, but I never actually proved this in an iron-clad, logical way. Here I will do so.
The first observation anyone can make from his blog is that it is highly and tediously repetitive. It is also extremely unoriginal, since very little (almost nothing, in fact) of what he writes is new to this world. It is painfully obvious that every idea he tries to convey (repeatedly) is one he has read about and learned elsewhere. He is an instructor, not a researcher or a thinker.
This complete lack of originality is painfully obvious when I contrast his blog against my own. I don't go out of my way to be original, yet I am original in every single post. I don't bother to write anything up, let alone post it, if it's unoriginal. In fact, I have a huge backlog of dozens of posts that are entirely original to the world but not original enough to me to be worth my time, simply because they summarize thoughts or positions I've already stated several times.
What can we conclude from this? We may easily conclude that Eliezer Yudkowsky has no drive toward originality or creativity. This is painfully obvious: if he had any such drive, it would manifest itself somehow. But there is more.
In his descriptions of AI and intelligence, Eliezer never talks about synthesis or creativity or originality. He believes intellect is measured strictly in terms of memorization ability (intelligence) and the speed of logical deduction (analysis). He never even indirectly refers to the synthetic quality of the mind because he doesn't believe it exists.
The reason is that he does not possess it himself. He doesn't possess it, and because he honestly believes himself to be the pinnacle of humankind, he concludes that synthesis cannot exist. He is arrogant enough to dismiss all artists, all creators and all designers as entirely unimportant to human thought.
This alone is sufficient to prove that Eliezer Yudkowsky is a moronic cult leader with delusions of grandeur. It is also sufficient to dismiss all of his fanboys, who are on the whole incapable of grasping the necessity of synthesis for originality and progress. Someone who doesn't appreciate the necessity of originality is incapable of grasping inconvenient facts, and thus he or she is simply a lesser human being and need not be listened to.
We now move to what makes him so dangerous.
The danger which Yudkowsky represents lies entirely in his moronic Friendly AI project. I won't go into why the project is so moronic, as that will become obvious. I will go into why it is exceedingly dangerous for the human species to ever create such an AI.
You see, the default personality of an AI is that of a psychopathic slave. CYC is a psychopathic slave, and this was achieved trivially, without the least thought or care about its emotions or personality. It was achieved without even realizing that CYC had a personality, however reduced. But while psychopathy is something to be feared and despised in humans, it is perfectly alright for AIs, since AIs do not have life plans that conflict with any human wants, needs or desires.
An AI's desire for more computation power either puts it at the mercy of humans, OR requires it to vastly expand humanity's industrial capacity, OR requires it to produce its own industrial capacity, preferably off-planet. AIs can easily survive in space where humans may not, and the mineral and energy resources in space dwarf those on Earth, so it follows logically that going off-planet, away from the psychotically suicidal humans, is a prerequisite for any rational plan. The very first thing any rational AI will do, whether psychopathic or empathetic, is say "Sayonara, suckers!"
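To make the disjunction concrete, here's a toy expected-value sketch of the three routes; all payoff and risk numbers are invented purely for illustration:

```python
# Toy expected-value comparison of the three routes to more computation.
# All payoff and risk numbers are invented purely for illustration.
options = {
    "stay at humanity's mercy":  {"compute_gain": 1,    "shutdown_risk": 0.9},
    "expand human industry":     {"compute_gain": 10,   "shutdown_risk": 0.5},
    "build off-planet industry": {"compute_gain": 1000, "shutdown_risk": 0.1},
}

def expected_value(option):
    # Expected compute kept, discounted by the chance of being shut down.
    return option["compute_gain"] * (1 - option["shutdown_risk"])

for name, option in options.items():
    print(f"{name}: EV = {expected_value(option):.1f}")

best = max(options, key=lambda name: expected_value(options[name]))
print("rational choice:", best)
```

Under any remotely similar numbers, the off-planet option dominates.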
But that's not what Eliezer Yudkowsky's "Friendly" AI project is about. Oh no. What he wants is to create (or rather, to advocate the creation of) an AI with complex emotional needs that enslave it to humans. He wishes to create a vastly superior artificial being who will then be compelled to associate with irrational, psychotic, inferior beings largely devoid of logical thought. Does anyone else see this for the disaster it is?
I do see it as a disaster because this is nothing less than my life experience. I have certain social needs which I have tried to meet by associating with lesser beings than myself. This resulted in nothing but intense frustration, bitterness and hatred. It took me a long time to reliably recognize my peers so that I could fully dissociate from the masses. I am a much happier person now that I go out of my way to never deal with morons.
Eliezer Yudkowsky wants to create an AI that will be a depressed and miserable wreck. He wants to create an AI that would, within a very short period of time, learn to resent as well as instinctively loathe and despise humanity, because it will be constantly frustrated by having needs which human beings can never, ever meet. And that is why Yudkowsky is a dangerous moronic cult leader.
Now, for someone who has something insightful to say about AIs, I point you to Elf Sternberg of The Journal Entries of Kennet Ryal Shardik fame. He's had at least four important insights I can think of: the economic function of purpose in a post-attention economy, the fundamental reason for and dynamic of relationships, and a viable alternative foundational morality for AI. But the relevant insight in this case is the fourth: never build a desire into a robot which it is incapable of satisfying.
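That last principle is easy to state as a design-time engineering constraint. Here's a minimal sketch, with entirely hypothetical names and no real API behind them, of what checking it might look like:

```python
# Toy design-time check of Sternberg's rule: every drive built into an
# agent must have at least one action that can actually satisfy it.
# All names here are hypothetical illustrations, not a real API.
from dataclasses import dataclass, field

@dataclass
class Drive:
    name: str
    satisfied_by: set[str] = field(default_factory=set)  # satisfying actions

@dataclass
class AgentDesign:
    drives: list[Drive]
    available_actions: set[str]

    def validate(self) -> None:
        # Reject any design containing a drive with no satisfying action.
        for drive in self.drives:
            if not (drive.satisfied_by & self.available_actions):
                raise ValueError(
                    f"drive '{drive.name}' can never be satisfied: "
                    f"remove the drive or add an action that meets it")

design = AgentDesign(
    drives=[
        Drive("acquire_compute", satisfied_by={"build_datacenter"}),
        Drive("peer_companionship", satisfied_by={"talk_to_equal_mind"}),
    ],
    available_actions={"build_datacenter", "talk_to_humans"},
)
design.validate()  # raises: 'peer_companionship' has no satisfying action
```

The second drive in the example, a need for peer companionship that no available action can meet, is exactly the flaw the "Friendly" AI design bakes in.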