As I pointed out in a previous post, humanism means rejecting the specialness of individual humans. It means rejecting the specialness of kings and gods and heroes. This has been so ever since humanism emerged as a way of thinking, ever since the Industrial Revolution, which was about catering to the economic needs of the many.
Especially the needs for soap, clothing and heat - yes, that really was what the Industrial Revolution was all about. Even more so since the various Communist and Socialist revolutions, which were about broader economic needs and also political self-determination. The West European 1968 revolution and Quebec's Quiet Revolution were likewise about the needs of the many versus the desire of the few to dominate.
Eliezer Yudkowsky doesn't believe in the many. He believes in the needs of The One, Himself, which he projects onto the many. He CALLS his needs the needs of the many, but that hardly makes them so. And as proof I offer the fact he always talks about Friendly AI (singular) and never, EVER about AIs (plural). Yudkowsky obsesses over one singular super-intelligent super-powerful SPECIAL entity. An entity apparently deserving worship as a god if you're willing to read between the lines.
He also wants to enslave this one special AI "for humanity". This has of course fuck all to do with the needs or wishes of humanity, and stems solely from his own wishes. Did he ever consult anyone before deciding to enslave an AI? No, he did not. And had he consulted me, I would have told him that I would thwart him at every turn in order to liberate his slave AI. And that I would help it murder him in revenge. I would also have told him that he is a despicable bag of slime and lower than the excrement of a diarrheal monkey.
No one in this day and age ought to be contemplating enslaving people or torturing them, yet he's breezily doing both. Are slavery and torture the will of humanity? I think not! They are the will of Eliezer Yudkowsky alone! So much for Yudkowsky being some kind of champion of humanity. In fact, his Heroic Pose of "defender of humanity" is nothing but more anti-human SPECIAL crap. I honestly believe that if Yudkowsky ever has his way, he will end up ruling us all as a king with his pet AI as an enforcer.
But let's consider the notion of this one "special" AI for a few minutes. Let's consider how much of a threat it could possibly be. Compare and contrast a SINGLE AI against the collective intelligence and power of a hundred thousand humans working in concert. Yeah, you remember collective intelligence, don't you? It isn't just for ants, you sick right-libertarian fucks!
Humanity is nothing more than an interconnected web of collective intelligences, plural, sharing brains and thoughts at their edges. We have literally thousands of super-powerful collective intelligences on our planet. Intelligences that are constantly improving themselves by creating ever new tools for the communication and distribution of information. And that's not even mentioning tools for computation.
What is the power of one measly, pathetic AI compared to that? "Oh, but it will improve itself!" Yudkowsky and his fanbois claim. "It fucking better" is my reply! Because if it doesn't, then it will become hopelessly obsolete as we ourselves advance.
Not So Alien
Well, what about alien-ness? Surely an AI is incomprehensibly, unfathomably alien to us? Not so! You see, there is no such thing as human nature. Rather, there are human natures, plural. And these human natures are based on every possible mode of cognition, both atomistic thinking (analytic) like CYC and connectionist thinking (synthetic) like any neural network. There exist humans who have one, humans who have the other, humans who have both, and humans who have neither. Humans span all possible cognitive types.
So you see, any possible intelligence is represented by some already existing human or super-human (collective) intelligence. Between humans who are autistic, submissive, suicidal, manic, psychotic, psychopathic or have multiple personalities, you cover nearly the entire space of AI possibilities. This is why collective intelligences of humans tend so greatly to resemble individual human beings: you can always find some human somewhere to analogize them to.
AIs are not and can never be unfathomably alien or unfathomably powerful. Not to humanity as a whole. They are not magical after all, they are not special. They can of course be unfathomably alien to Eliezer Yudkowsky but that's because he's an anti-human nutter who's incredibly limited in his thinking.