Monday, September 29, 2008

Eliezer Yudkowsky's Friendly AI Project

[Some people may want to skip straight to instructions for making harmless AIs rather than reading about the many things wrong with that crazy bastard Yudkowsky. - 12 Mar 2011]

I've recently been debating the merits of Eliezer Yudkowsky's Friendly AI project. And by project I mean obsession, since there doesn't seem to be any actual project at all, or even a few half-baked ideas for that matter. Well, to my mild surprise, since this is someone I nominally respect, I have discovered that I believe he is a complete fucking idiot.

Eliezer believes strongly that AI are unfathomable to mere humans. And being an idiot, he is correct in the limited sense that AI are definitely unfathomable to him. Nonetheless, he has figured out that AI have the potential to be better than human beings. And like any primitive throwback presented with something potentially threatening, he has gone in search of a benevolent deity (the so-called Friendly AI) to swear fealty to in exchange for protection against the evil god.

Well, let's examine this knee-jerk fear of superhumans a little more closely.

First, there are order-of-magnitude differences in productivity between different programmers. If we count the people who can never program at all, then there are orders of magnitude of difference. If we throw in creativity, then there are still more orders of magnitude of difference between an average human and a top human. And somehow they've managed to coexist without trying to annihilate each other. So what does it matter if AI are orders of magnitude faster or more intelligent than the average human? Or even than the top human?

Second, extremely high intelligence is not at all correlated with income or power. The correlation between intelligence and income holds right up until you reach the extreme end of the scale, at which point it completely decouples. There is absolutely no reason to believe, except in the nightmares of idiots, that vastly superior intelligence translates into any form of power. This knee-jerk fear of superior intelligence is yet another reason to think Eliezer is an idiot.
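For the statistically inclined, here's a minimal sketch of the mechanism, using nothing but synthetic numbers (this is plain old range restriction, not real IQ or income data): take two traits with a respectable overall correlation, restrict attention to the top sliver of one of them, and the correlation within that sliver collapses.

```python
# Minimal sketch: a strong overall correlation between two traits can
# largely vanish once you restrict attention to the extreme tail of one
# of them (range restriction). Purely synthetic data -- nothing here is
# real intelligence or income data.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

intelligence = rng.standard_normal(n)
# "income" = correlated component + independent noise, giving r ~ 0.5 overall
income = 0.5 * intelligence + np.sqrt(1 - 0.5 ** 2) * rng.standard_normal(n)

overall_r = np.corrcoef(intelligence, income)[0, 1]

# Now look only at the top 0.1% of the intelligence distribution.
cutoff = np.quantile(intelligence, 0.999)
tail = intelligence > cutoff
tail_r = np.corrcoef(intelligence[tail], income[tail])[0, 1]

print(f"overall correlation:  {overall_r:.2f}")  # roughly 0.5
print(f"top-0.1% correlation: {tail_r:.2f}")     # far weaker, near 0.1
```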

Third, any meaningful accomplishment in a modern technological society takes the collaborative efforts of thousands of people. A nuclear power plant takes thousands of people to design, build and operate. Same with airplanes. Same with a steel mill. Same with an automated factory. Same with a chip fab. So let's say you have an AI that's one thousand times smarter than a human. Wow, it can handle a whole plant! That's so terrifying! Run for your life!

There are six billion people on the planet. Say one billion of them are educated. Well, that far outstrips any prototype AI we'll manage to build. And the notion that a psychopathic or sadistic AI will just bide its time until it becomes powerful enough to destroy all of humanity in one fell swoop ... is fucking ludicrous.

Going on, the notion that the very first AI humans manage to build will be some kind of all-powerful deity that can run an entire industrial economy all by its lonesome ... is fucking ludicrous. It isn't going to be that way. Not least because the supposed "Moore's law" is a bunch of crap.

And even if that were so, the notion that humans would provide access to the external world to a single all-powerful entity ... vastly overestimates humans' ability to trust the foreign and alien. And frankly, if humans were so stupid as to let a never-before-known entity, unique on the entire planet, out of its cage (the Skynet scenario), then they're going to get what they deserve.

Honestly, I think the time to worry about AI ethics will be after someone makes an AI at the human retard level. Because the length of time between that point and "superhuman AI that can single-handedly out-think all of humanity" will still amount to a substantial number of years. At some point during those years, someone who isn't an idiot will cotton on to the idea that building a healthy AI society is more important than building a "friendly" AI.

Having slammed Eliezer so much, I'm sure an apologist of his would try to claim that he is really concerned with late-stage AIs with brains the size of Jupiter. Notwithstanding the fact that this isn't what Eliezer says (and he's quite clear elsewhere about what he does say), I am extremely hostile to the idea of humanity hanging around for the next thousand years.

Rationality dictates there be an orderly transition from a human-based to an AI-based civilization, and nothing more. Given my contempt for most humans, I really don't want them to stick around to muck up the works. Demanding that a benevolent god keep Homo sapiens sapiens around until the stars grow cold is just chauvinistic provincialism.

Finally, anyone who cares about AI should read Alara Rogers' stories in which she describes the workings of the Q Continuum. In them, she works through the implications of the Q being disembodied entities that share thoughts. In other words, this fanfiction writer has come up with more insights into the nature of artificial intelligence, off the cuff, than Eliezer Yudkowsky, the supposed "AI researcher". Because all Eliezer could think of for AI properties is that they are "more intelligent and think faster". What a fucking idiot.

There's at least one other good reason why I'm not worried about AI, friendly or otherwise, but I'm not going to go into it for fear that someone would do something about it. This evil hellhole of a world isn't ready for any kind of AI.