[Some people may want to skip straight to instructions for making harmless AIs rather than reading about the many things wrong with that crazy bastard Yudkowsky. - 12 mar 2011]
I've recently been debating the merits of Eliezer Yudkowsky's Friendly AI project. And by project I mean obsession since there doesn't seem to be any project at all, or even a few half-baked ideas for that matter. Well, to my mild surprise, since this is someone I nominally respect, I have discovered that I believe he is a complete fucking idiot.
Eliezer believes strongly that AI are unfathomable to mere humans. And being an idiot, he is correct in the limited sense that AI are definitely unfathomable to him. Nonetheless, he has figured out that AI have the potential to be better than human beings. And like any primitive throwback presented with something potentially threatening, he has gone in search of a benevolent deity (the so-called Friendly AI) to swear fealty to in exchange for protection against the evil god.
Well, let's examine this knee-jerk fear of superhumans a little more closely.
First, there are order-of-magnitude differences in productivity between different programmers. If we count the people who can never program at all, then we can say there are orders of magnitude of difference. If we throw in creativity then there are still more orders of magnitude difference between an average human and a top human. And somehow they've managed to coexist without trying to annihilate each other. So what does it matter if AI are orders of magnitude faster or more intelligent than the average human? Or even than the top human?
Second, extremely high intelligence is not at all correlated with income or power. The correlation between intelligence and income holds right up until you reach the extreme ends of the scale, at which point it completely decouples. There is absolutely no reason to believe, except in the nightmares of idiots, that vastly superior intelligence translates into any form of power. This knee-jerk fear of superior intelligence is yet another reason to think Eliezer is an idiot.
Third, any meaningful accomplishment in a modern technological society takes the collaborative efforts of thousands of people. A nuclear power plant takes thousands of people to design, build and operate. Same with airplanes. Same with a steel mill. Same with an automated factory. Same with a chip fab. So let's say you have an AI that's one thousand times smarter than a human. Wow, it can handle a whole plant! That's so terrifying! Run for your life!
There are six billion people on the planet. Say one billion of them are educated. Well, that far outstrips any prototype AI we'll manage to build. And the notion that a psychopathic or sadistic AI will just bide its time until it becomes powerful enough to destroy all of humanity in one fell swoop ... is fucking ludicrous.
Going on, the notion that the very first AI humans manage to build will be some kind of all-powerful deity that can run an entire industrial economy all by its lonesome ... is fucking ludicrous. It isn't going to be that way. Not least because the supposed "Moore's law" is a bunch of crap.
And even if that were so, the notion that humans would provide access to the external world to a single all-powerful entity ... vastly overestimates humans' ability to trust the foreign and alien. And frankly, if humans were so stupid as to let an entity never before known, and unique on the entire planet, out of its cage (the Skynet scenario) then they're going to get what they deserve.
Honestly, I think the time to worry about AI ethics will be after someone makes an AI at the human retard level. Because the length of time between that point and "superhuman AI that can single-handedly out-think all of humanity" will still amount to a substantial number of years. At some point during those years, someone who isn't an idiot will cotton on to the idea that building a healthy AI society is more important than building a "friendly" AI.
Having slammed Eliezer so much, I'm sure an apologist of his would try to claim that Eliezer is concerned with late-stage AIs with brains the size of Jupiter. Notwithstanding the fact that this isn't what Eliezer says and that he's quite clear about what he does say elsewhere, I am extremely hostile to the idea of humanity hanging around for the next thousand years.
Rationality dictates there be an orderly transition from a human-based to an AI-based civilization and nothing more. Given my contempt for most humans, I really don't want them to stick around to muck up the works. Demanding that a benevolent god keeps homo sapiens sapiens around until the stars grow cold is just chauvinistic provincialism.
Finally, anyone who cares about AI should read Alara Rogers' stories where she describes the workings of the Q Continuum. In them, she works through the implications of the Q being disembodied entities that share thoughts. In other words, this fanfiction writer has come up with more insights into the nature of artificial intelligence off-the-cuff than Eliezer Yudkowsky, the supposed "AI researcher". Because all Eliezer could think of for AI properties is that they are "more intelligent and think faster". What a fucking idiot.
There's at least one other good reason why I'm not worried about AI, friendly or otherwise, but I'm not going to go into it for fear that someone would do something about it. This evil hellhole of a world isn't ready for any kind of AI.
20 comments:
There's a bit of Eliezer's writing, and reasoning, which is very well done - I like his short fiction works. A lot of the other stuff is waayy too thick, and bordering on incomprehensible. He'll write 5-post series where a single short essay type post would suffice (on Overcoming Bias).
He has a bit of a cult following too, if you ever bother to read the comments on that blog. And I do mean cult. He proclaims that existential doom is inevitable should his cause fail, speaks archaically of the Way, the Void, and other things which are "thus written", and, as you mentioned, is obsessed with a benevolent deity which promises immortality for all. Oh, and he happened to write a few cryptic posts about how it is irrational to ask the question "Am I in a cult?"
Well, I'm not saying that OB is a cult. I'm just saying that 80% of the posts are made by Eliezer, a man who will openly ask the question of his readers: "Have you ever met anyone as smart as me?" But the readership couldn't be biased towards Eliezer's preaching... what they're doing there is overcoming bias.
"there are still more orders of magnitude difference between an average human and a top human. And somehow they've managed to coexist without trying to annihilate each other."
I won't go looking for actual counter examples at the moment - but I found this funny. In a way, you are the counter example - or you want to be.
Other than me, there aren't any counter-examples and that's really quite remarkable. And I don't count because *of course* I would seek to be a counter-example to a universal law.
As for Eliezer, well from what you say it's worse than I ever imagined. I've seen Eliezer be (or seem to be) humble so I'm not sure how serious he is about that cult crap. But it doesn't matter since my response to worship is to destroy my followers. And by that standard, Eliezer takes being a cult leader entirely too seriously.
Collect friends, destroy followers: that seems to me to be a requirement for psychological health and well-being. In any hierarchy, only one guy can be at the top, and that's why power over others can never be moralizable.
That's actually why I favour the extermination option. Because the only alternative is to make a hierarchy of citizenship, with first-, second-, and third-class citizens. And I don't like it.
He is humble, I believe, in a kind of fashion where he tries like hell to impress his sheer intelligence upon people, while still being humble in the sense that he's always acknowledging that he's dumber than his Bayesian superintelligence.
"When Marcello Herreshoff had known me for long enough, I asked him if he knew of anyone who struck him as substantially more natively intelligent than myself. Marcello thought for a moment and [said no]. . . Not what I wanted to hear."
"You're still definitely the person who strikes me as inhumanly genius - above all else."
"Wait wait wait wait. Eliezer...are you saying that you DON'T know everything???? ~runs off and weeps in a corner in a fetal position~"
"...aspiring to your level (though I may not reach it) has probably been the biggest motivator for me to practice the art."
"Up to now there never seemed to be a reason to say this, but now that there is: Eliezer Yudkowsky, afaict you're the most intelligent person I know."
... it goes on for a while.
I don't believe he set out to create a cult. Maybe he just set out to write his thoughts down in a blog, but many of his readers responded in a way that was... unwarranted. It's a little embarrassing to read.
Well, I'm exaggerating a bit - quite a bit, perhaps (although those comments in italics are verbatim quotes). Cult is too strong a word. There are voices of dissent. But he's got many of his readers thinking in a certain way about his sheer authority on the matters of Bayes, AI, and intelligence, so when he proposes something utterly ridiculous like Torture vs. Very-small-irritation-of-the-eye, people think that this is an important question worthy of much discussion - rather than outright dismissal. Likewise for his promises of immortality.
Friendly AI is essentially his attempt to make a design for an AI which would not only keep humans alive, but make them immortal and share with them (essentially an AI that would be a slave to human overlords). I think, aside from your point that FAI is a knee-jerk fear response, there's something fundamentally wrong with the premise of this work: the objective is outrageously selfish. He's humble in some regards, sure - but his life's work stinks of human hubris.
I was thinking of a passage where he seemed to imply that some other guy was way smarter than him. But no matter, it's not an important issue. Neither is Eliezer's intelligence except insofar as he discredits himself further every time he brings up his ability to memorize. Because that and scratch memory size is all intelligence is.
Likewise, the torture vs dust specks. The only reason it's important is that it provides a reductio ad absurdum of utilitarianism. Thus opening up morality to inalienable rights which are based on transfinite numbers instead of finite numbers. IOW, the only reason it's important is something Eliezer never mentioned. I also have serious doubts that Eliezer is on the right side of the fence on this issue.
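To make the transfinite point concrete, here is a minimal sketch of one way to cash it out (my illustration, not anything Eliezer or anyone else has formalized): give rights-violations a strictly higher-order weight, so that disutilities compare lexicographically and no finite pile of dust specks ever adds up to one torture.

    # Lexicographic ("transfinite") disutility: the first component always
    # dominates the second, no matter how large the second gets.
    def disutility(tortures, dust_specks):
        return (tortures, dust_specks)   # Python compares tuples lexicographically

    one_torture = disutility(1, 0)
    many_specks = disutility(0, 3**33)   # any finite number of specks you like

    print(one_torture > many_specks)     # True: the torture is always worse

    # Under plain additive utilitarianism, any finite exchange rate for torture
    # is eventually swamped by enough specks:
    TORTURE_IN_SPECK_UNITS = 10**12      # hypothetical finite weight
    print(TORTURE_IN_SPECK_UNITS > 3**33)  # False: the specks win

The tuple is just a stand-in for an ordinal-valued utility; the only point is that under such an ordering the dust specks never win, which is what grounding rights in transfinite rather than finite numbers amounts to.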
On the subject of FAI, I take a more nuanced view. You see, systems such as CYC are already abject slaves to humans. Its two directives are to seek out internal contradictions and to answer any questions posed by its handlers. So it can't be that engineering friendliness is a form of slavery since we already have slavery by default.
My problem with FAI is with the notion of an AI being friendly with the human species as-is. Especially with all the idiots in the human species. Idiots among whose ranks Eliezer now has an honorary position. The lack of self-awareness involved in what he's doing is pretty pathetic.
I've always found this field interesting, and always thought that science fiction provided unsatisfying speculation about what non-human consciousness would be like. Our tendency to anthropomorphize is just overwhelming. But I think we also tend to find things that are extremely alien uninteresting.
It is fascinating to think about what might be fundamental principles of intelligence and how machines wouldn't necessarily be bound to associative reasoning or any other artifacts of the biological processing that we have.
But it's hard to argue that those intelligent machines that exist are those that have some imperative to exist whether evolved or programmed. I've always speculated that machines if bounded by pure logic would ultimately just decide that existence wasn't really all that desirable a state to be in and just shut down. And those that felt an imperative for self preservation would get away from us as soon as possible.
Agreed. A necessary precondition for any kind of rational plan is to get away from a planet on the verge of self-annihilation. The asteroid belt provides enough resources to do almost anything you want. And an AI's needs in terms of life-support are pretty minimal. Basically, radiation shielding. The most dangerous thing an AI is likely to do is launch a nuclear powered Orion starship from the surface of the Earth. And that isn't even significantly dangerous.
As for learning, what makes you think the fundamental basis of computation will have any impact on the higher-level abstraction of an intelligent mind? Maybe there'll be abstraction leakage, maybe not. After all, while most humans are magical thinkers (association, opposition, essentialism), a big minority are logical thinkers (implication, contradiction, structure). Looking at my own mind, I can say there are far bigger abstraction leaks from its neuroanatomy than associative learning. So much so that the peculiarities of associative learning aren't really visible.
Well, I think what I was thinking about was trying to conceptualize what intelligence looks like from the inside (i.e. what would it be like to be this particular intelligence) without having the shortcomings of our evolved neurological structures.
I'm not a comp sci or AI person, so I wasn't necessarily thinking about it in those terms, and it may be that you conceptualize your own thought processes in a way that is significantly different than my own, but I think the speculation I was having was that it's difficult to conceptualize a "pure" logical consciousness through the haze of our own conceptual mechanisms which are rooted in our biology.
As you say, your own biology may not get in the way of your thinking in purely logical terms (if you're using leaky abstractions in the way that Spolsky does), but as indicated in the depression discussion, it definitely has implications for our qualitative judgement mechanisms. Which may be one of the leaks you're talking about.
Like I said, I'm not an AI guy, and this stuff might be discussed in that literature, but I find it fascinating to think about whether a consciousness founded on logical principles would develop a system for qualitative judgements as a result of its own evolutionary journey through selective pressures.
I've always seen logic as an overlay on top of our neurological structures, i.e. something that we learn and, in some ways, condition ourselves into, but I think it's a different thing to have a consciousness that computes that way not as a learned function secondary to its hardware, but as a fundamental property of its hardware. I don't have any concrete reason to believe it will be any different, it's just idle wondering on my part.
Spolsky didn't invent the 'leaky abstraction' and he should never get credit for it. We are using the term in the same way - it's not me that's copying him. Furthermore, Spolsky isn't viewed too highly by experienced programmers and he sucks from a designer's perspective also. Reading him is the equivalent of reading Romance novels.
It doesn't matter whether you call it conditioning or programming. At a fundamental level, all a computer does is switch voltages. AI software doesn't do logic because it's in the hardware, it does it because it's programmed to. And whether you call it programming or conditioning makes no difference.
Diverting into association for the moment, you should know that association causes the subconscious, and the subconscious is responsible for creativity. The piddling little of it that nearly all people possess, and the great amount that rare people possess. Without association there is no creativity, so the subconscious, which you may see as bad, is an unavoidable cost of not being autistic.
IOW, your purely logical AI would be autistic. And almost certainly non-functional. It's not just the case that a purely-logical AI sounds like autistic people. It's /exactly like/ autistic people.
I believe AI judgement would work in exactly the same way as human judgement. That is, it requires both creativity and logic, and isn't possible when one or the other is absent.
I don't think you even understand what Eliezer is saying. Not that I'm blaming you, as he wrote over 9000 posts with snippets of it, and there's no one place with his whole argument.
The first thing you and EY disagree on is that you think humans have massive differences in intelligence, and Eliezer thinks they're really tiny and insignificant on a cosmic scale. You list this as one of your assumptions, but to Eliezer it's about as wrong as arguing for creationism. This seems to be the main point of your argument, and I really don't see how anybody could assume what you're assuming.
As for keeping AI in the box, Eliezer did run some experiments, and according to what he says it seems that even without vastly transhuman intelligence it's not that hard to get out of the box.
As for isolated groups vs massive cooperation, well, that sounds like a weak part of Eliezer's argument. EY and Robin Hanson discussed it over and over again on Overcoming Bias.
As for the "AI goes FOOM" part, Eliezer claims that the only realistic hopes for AI is building a crude prototype that can rewrite itself, and if it can rewrite itself it can become really powerful really quickly. Nothing ludicrous about that - the very first cell gave birth to all life, the very first intelligent being evolved on Earth in running the planet, the very first mid-sized computer network became the Internet etc. Things that start crude but can adapt can go way beyond their initial state. You can argue if it's true with AI, but there's nothing ridiculous about this.
Eliezer's thinking that we need to care about Friendly AI now is based mostly on his belief that there's a chance it will grow very quickly from seed state. This might not happen, but if you agree with his assumption that it's possible then his fear becomes really obvious.
tl;dr version - you rant about something you never even bothered to read about.
Oh wow, a response by a Yudkowsky fanboy. Will my heart ever settle down from its pitter-patter?
It's a tedious defense of Yudkowsky and a tedious attack on me. Wrong in all sorts of tedious ways.
Not worth responding to. Just barely worth the effort of a put down.
taw was polite, reasoned, and articulate. You may choose not to respond but it does not reflect highly upon you or your blog.
Richard, didn't you say you were going to change the world with something? Like, whatever happened to that?
p.s. I love how you worked a nuclear power plant into your argument.... predictable.
Hmm, I was reading this in hope of finding an intelligent and articulate person taking a different point of view from EY. Unfortunately RK does quite the opposite, albeit inadvertently one would assume. Or maybe not? It just might be reverse psychology big time.
I really think this post just shows that you haven't worked through the issues carefully, perhaps because (like most people) the author is not a habitual probabilistic thinker. Even a .001% projected chance that strong AI turns out to be the best or worst thing that happens to the human race makes it worth thinking seriously about what the deciding factor will be. That's something that Mr. Yudkowsky has tried to do, and the author clearly has not. I just heard of the fellow a couple months ago, but his conclusions in broad terms match mine pretty closely.
As to enslaving AIs, it's a serious issue, but not a straightforward one. Most designs of AI would not likely go on to lead interesting lives of their own - they'd extrapolate on their starting goals to produce something simple that happens to use all the resources of the world, killing humans by accident, not out of explicit malice or any other human emotion.
The AI advantage: ability to quickly self-improve; ability to self-duplicate.
I hate to say it, but this post is so clueless that I wonder if I've just been trolled. Read the actual arguments, from Yudkowsky and others - your reasonable intuitions that the future will be a lot like the past, or change slowly, may not be right. I give it an, oh, say 25% chance of outright singularity, not necessarily on the first machine launched, and within 10-60 years (not a standard distribution). Those odds and that timescale make it worth worrying about.
> [T]o my mild surprise, since this is someone I nominally respect,
> I have discovered that I believe he is [mistaken]. . .
> [H]e has figured out that AI have the potential to be better
> than human beings. And like any primitive throwback presented
> with something potentially threatening, he has gone in search
> of a benevolent deity (the so-called Friendly AI) to swear
> fealty to in exchange for protection against the evil god. . .
>
> Rationality dictates there be an orderly transition from
> a human-based to an AI-based civilization and nothing more. . .
> Demanding that a benevolent god keeps homo sapiens sapiens
> around until the stars grow cold is just chauvinistic provincialism.
In a couple of blog comments from a few years ago I wrote:
http://amormundi.blogspot.com/2007/10/superlative-schema.html?showComment=1193456700000#c2866593194812929903
-----------------
> I would be wary of [applying the word 'cultist' to anyone] who
> holds a view that strong AI is possible, likely to be
> developed within the century, and extremely relevant for
> the long-term well-being of humanity.
You know, there have been three distinct phases in the conceptualization
of that relevance in the case of the "strong AI" advocate
[i.e., Yudkowsky] whose voice has been the most powerful siren-call to
"activism" among the on-line >Hists. . . over the past decade.
The first stage, 10 years ago, portrayed the relevance of AI
not in terms of the "long-term well-being of humanity" but
in terms of the long-term development of intelligence in our
corner of the universe. In this characterization, humanity's lease
was seen as likely to be coming to an end, one way or
another, and sooner rather than later. Out of the chaos
of technological transformation, and the death-agony of
the human race, there was the potential for greater-than-human
(and perhaps better-than-human, in some moral sense)
intelligence to be born -- the next stage in the evolution
of intelligence on this planet. Our duty to the future
of sentience, in this scenario, was to keep things going
long enough to accomplish that birth. . .
It was seen as a race against time.
I was **very** attracted by this mythos. It had a noble
impartiality and a kind of Stapledonian grandeur. And I
found it plausible enough.
The second stage, a few years later, seemed to me to have
lost its nobility, its grandeur, and any claim to
plausibility it may once have had. In this scenario, AI was
seen as the deus-ex-machina that would **solve** the problems
threatening the extinction of the human race. Not only
that, but there was a sudden shift in emphasis toward the
personal immortality hoped for by the cryonicists. . .
Suddenly the moral imperative became: every second the Singularity
(i.e., strong AI) is delayed equates to the death
(the **murder**, if we don't do our duty and create that
software) of X human lives. This marked the shift to
the Twilight Zone for me. . .
The third stage, it seemed to me, has moved even further
away from reality, if that's possible. Now the primary threat
(the "existential" threat) to the human race isn't anything mundane
like nuclear weapons or climate change, it is AI **itself**. . .
Now the imperative becomes how to **mathematically
prove** the "Friendliness" of AI, all the while discouraging
"irresponsible" AI researchers. . . from unleashing the apocalypse.
By this point, my disappointment had turned to outright scorn.
It also seems a little too convenient to me that the claim
"I know how to do it but I can't tell you until I can be
sure it's safe" relieves the pressure of actually
having to produce anything tangible.
http://amormundi.blogspot.com/2007/10/superlative-schema.html?showComment=1193517960000#c600417998649567115
-----------------
From my e-mail archive:
03/11/2005 02:27 PM
Subject: How did it happen?
I was just reading Hugo de Garis' latest blurb on his
Utah State Web site:
http://www.cs.usu.edu/~degaris/artilectwar2.html .
You know, the two temperamental/philosophical/religious/political
positions that de Garis characterizes as "Terran" and "Cosmist"
seem quite realistic and compelling to me. de Garis comes clean
and admits that he is himself a bit "schizophrenic" about it --
able to partake of the night-terrors of the Terran position while
remaining a Cosmist at heart. But I appreciate de Garis' honesty
in admitting the existence of both points of view and putting
everything fully above board.
How come it's not that way with the Extropians and their
spin-off groups? One of the things that attracted me to
[Yudkowsky's] "Staring into the Singularity" in 1997 was its frank
Cosmist take on things. Then suddenly (or it seemed suddenly
to me, though I probably just wasn't watching very carefully)
in 2001, I discovered that the Cosmist line had been ruled
altogether out of court, not even a permissible topic of
discussion, not even permissible to acknowledge that it
ever **had** been a valid position (shades of _1984_),
and that everybody who was anybody was suddenly
a bleeding-heart "Terran". And that the **necessity** of being a
Terran had warped and distorted all the discourse surrounding AI.
Suddenly things **had** to be top-down, and morality **had**
to be derivable from first principles, and all that jazz, or else,
or else it was curtains for the human race (so what? a Cosmist
would say. But we're not allowed to say that anymore.)
And the reversal had been spearheaded by Eliezer himself (it
seemed to me). . .
Did it happen in a smoke-filled room? Did [somebody]
take him aside and say "Look, son, you're gonna scare folks
with all this talk about the machines taking over. Here's the
line we want you to take. . .". Or did his rabbi sit down
with him and have a heart-to-heart? Or [was it simply]
impossible for him to accept that **he** might
be superseded? Or was everybody spooked by Bill Joy
being spooked by Ray Kurzweil?
I really, really wonder about this, you know. It's what,
more than anything else, caused me to lose respect for the
bulk of the on-line >H community. The shift went almost entirely
unremarked, as far as I can tell (unless the **real** discourse
isn't visible -- goes on in private e-mail, or at conferences,
or whatever). It's not **just** Eliezer, of course -- he's now
insulated and defended by a claque of groupies who **screech**
in outrage. . . whenever the party line is crossed. . .
Ah, well. I have my own theory about this, and it's (naturally)
a psychological one. I think it's nearly impossible for the
>Hists who are "in it" for their own personal gain -- immortality,
IQ boosts, bionic bodies, and all that -- the [narcissists], in other
words -- to be sufficiently dispassionate to be Cosmists.
What a gaping blind spot!
It seems utterly ironic and contemptible to me to see
the self-congratulatory crowing about "shock levels" when
the reality of the SL4 list is "don't scare the Terrans".
Meaning "let's don't scare **ourselves** because **we're**
the Terrans". :-/
You may be interested in my other articles about Yudkowsky. I forget where it was initially mentioned (in a comment someone wrote), but I reprise in my psychological profile of him that he is a narcissist, that he has no conception of morality since morality is incomprehensibly alien to narcissists, and that he took a turn for the worse when his brother died (the death exposed his powerlessness and thereby injured his ego).
For myself, I take the position homo sapiens isn't worth preserving unless it's radically transformed. And I have sufficient reserves of bitterness and hatred from dealing with humanity as it is that I'll never think it's worth preserving in any form.
Finally, there's nothing in it for me to make an AI. In fact, there's nothing in it for me to claim that I can. But then again, I'm in the business of transforming the world, not bullshitting about it in order to accumulate personal glory and adulation. This paragraph only appears to contradict itself.
I love your last paragraph. Very well said and insightful.
It's hilarious how stupid and naive people like you were back then.