Comments on Richard Kulisz: "Eliezer Yudkowsky's Friendly AI Project" (20 comments)

Anonymous (2023-04-21 21:44):
It's hilarious how stupid and naive people like you were back then.

Richard Kulisz (2011-06-10 06:24):
I love your last paragraph. Very well said and insightful.

Richard Kulisz (2011-06-10 06:19):
You may be interested in my other articles about Yudkowsky. I forget where it was first mentioned (in a comment someone wrote), but I reprise in my psychological profile of him the claims that he is a narcissist, that he has no conception of morality since it is incomprehensibly alien to narcissists, and that he took a turn for the worse when his brother died (suffering an injury to his ego by exposing his powerlessness).

For myself, I take the position that homo sapiens isn't worth preserving unless it's radically transformed. And I have sufficient reserves of bitterness and hatred from dealing with humanity as it is that I'll never think it's worth preserving in any form.

Finally, there's nothing in it for me to make an AI. In fact, there's nothing in it for me to claim that I can. But then again, I'm in the business of transforming the world, not bullshitting about it in order to accumulate personal glory and adulation.
This paragraph only appears to contradict itself.

jimf (2011-06-09 19:53):
http://amormundi.blogspot.com/2007/10/superlative-schema.html?showComment=1193517960000#c600417998649567115
-----------------
From my e-mail archive:

03/11/2005 02:27 PM
Subject: How did it happen?

I was just reading Hugo de Garis' latest blurb on his Utah State web site: http://www.cs.usu.edu/~degaris/artilectwar2.html

You know, the two temperamental/philosophical/religious/political positions that de Garis characterizes as "Terran" and "Cosmist" seem quite realistic and compelling to me. De Garis comes clean and admits that he is himself a bit "schizophrenic" about it -- able to partake of the night-terrors of the Terran position while remaining a Cosmist at heart. But I appreciate de Garis' honesty in admitting the existence of both points of view and putting everything fully above board.

How come it's not that way with the Extropians and their spin-off groups? One of the things that attracted me to [Yudkowsky's] "Staring into the Singularity" in 1997 was its frank Cosmist take on things. Then suddenly (or so it seemed to me, though I probably just wasn't watching very carefully) in 2001, I discovered that the Cosmist line had been ruled altogether out of court -- not even a permissible topic of discussion, not even permissible to acknowledge that it ever **had** been a valid position (shades of _1984_) -- and that everybody who was anybody was suddenly a bleeding-heart "Terran".
And that the **necessity** of being a Terran had warped and distorted all the discourse surrounding AI. Suddenly things **had** to be top-down, and morality **had** to be derivable from first principles, and all that jazz, or else it was curtains for the human race. (So what? a Cosmist would say. But we're not allowed to say that anymore.) And the reversal had been spearheaded by Eliezer himself (it seemed to me)...

Did it happen in a smoke-filled room? Did [somebody] take him aside and say, "Look, son, you're gonna scare folks with all this talk about the machines taking over. Here's the line we want you to take..."? Or did his rabbi sit down with him and have a heart-to-heart? Or [was it simply] impossible for him to accept that **he** might be superseded? Or was everybody spooked by Bill Joy being spooked by Ray Kurzweil?

I really, really wonder about this, you know. It's what, more than anything else, caused me to lose respect for the bulk of the on-line >H community. The shift went almost entirely unremarked, as far as I can tell (unless the **real** discourse isn't visible -- goes on in private e-mail, or at conferences, or whatever). It's not **just** Eliezer, of course -- he's now insulated and defended by a claque of groupies who **screech** in outrage whenever the party line is crossed...

Ah, well. I have my own theory about this, and it's (naturally) a psychological one.
I think it's nearly impossible for the >Hists who are "in it" for their own personal gain -- immortality, IQ boosts, bionic bodies, and all that -- the [narcissists], in other words -- to be sufficiently dispassionate to be Cosmists. What a gaping blind spot!

It seems utterly ironic and contemptible to me to see the self-congratulatory crowing about "shock levels" when the reality of the SL4 list is "don't scare the Terrans". Meaning "let's not scare **ourselves**, because **we're** the Terrans". :-/

jimf (2011-06-09 19:45):
> [T]o my mild surprise, since this is someone I nominally respect, I have discovered that I believe he is [mistaken]... [H]e has figured out that AI have the potential to be better than human beings. And like any primitive throwback presented with something potentially threatening, he has gone in search of a benevolent deity (the so-called Friendly AI) to swear fealty to in exchange for protection against the evil god...
>
> Rationality dictates there be an orderly transition from a human-based to an AI-based civilization and nothing more...
> Demanding that a benevolent god keep homo sapiens sapiens around until the stars grow cold is just chauvinistic provincialism.

In a couple of blog comments from a few years ago I wrote:

http://amormundi.blogspot.com/2007/10/superlative-schema.html?showComment=1193456700000#c2866593194812929903
-----------------
> I would be wary of [applying the word 'cultist' to anyone] who holds a view that strong AI is possible, likely to be developed within the century, and extremely relevant for the long-term well-being of humanity.

You know, there have been three distinct phases in the conceptualization of that relevance in the case of the "strong AI" advocate [i.e., Yudkowsky] whose voice has been the most powerful siren-call to "activism" among the on-line >Hists over the past decade.

The first stage, 10 years ago, portrayed the relevance of AI not in terms of the "long-term well-being of humanity" but in terms of the long-term development of intelligence in our corner of the universe. In this characterization, humanity's lease was seen as likely to be coming to an end, one way or another, and sooner rather than later. Out of the chaos of technological transformation, and the death-agony of the human race, there was the potential for greater-than-human (and perhaps better-than-human, in some moral sense) intelligence to be born -- the next stage in the evolution of intelligence on this planet. Our duty to the future of sentience, in this scenario, was to keep things going long enough to accomplish that birth... It was seen as a race against time.

I was **very** attracted by this mythos. It had a noble impartiality and a kind of Stapledonian grandeur.
And I found it plausible enough.

The second stage, a few years later, seemed to me to have lost its nobility, its grandeur, and any claim to plausibility it may once have had. In this scenario, AI was seen as the deus ex machina that would **solve** the problems threatening the extinction of the human race. Not only that, but there was a sudden shift in emphasis toward the personal immortality hoped for by the cryonicists... Suddenly the moral imperative became: every second the Singularity (i.e., strong AI) is delayed equates to the death (the **murder**, if we don't do our duty and create that software) of X human lives. This marked the shift to the Twilight Zone for me...

The third stage, it seemed to me, has moved even further from reality, if that's possible. Now the primary threat (the "existential" threat) to the human race isn't anything mundane like nuclear weapons or climate change; it is AI **itself**... Now the imperative becomes how to **mathematically prove** the "Friendliness" of AI, all the while discouraging "irresponsible" AI researchers... from unleashing the apocalypse. By this point, my disappointment had turned to outright scorn.

It also seems a little too convenient to me that the claim "I know how to do it, but I can't tell you until I can be sure it's safe" relieves the pressure of actually having to produce anything tangible.

Simulation Brain (2010-03-17 19:28):
I really think this post just shows that you haven't worked through the issues carefully, perhaps because (like most people) the author is not a habitual probabilistic thinker.
Even a 0.001% projected chance that strong AI turns out to be the best or worst thing that ever happens to the human race makes it worth thinking seriously about what the deciding factor will be. That's something Mr. Yudkowsky has tried to do, and the author clearly has not. I just heard of the fellow a couple of months ago, but his conclusions in broad terms match mine pretty closely.

As to enslaving AIs, it's a serious issue, but not a straightforward one. Most designs of AI would not likely go on to lead interesting lives of their own -- they'd extrapolate on their starting goals to produce something simple that happens to use all the resources of the world, killing humans by accident, not out of explicit malice or any other human emotion.

The AI advantage: the ability to quickly self-improve; the ability to self-duplicate.

I hate to say it, but this post is so clueless that I wonder if I've just been trolled. Read the actual arguments, from Yudkowsky and others -- your reasonable intuitions that the future will be a lot like the past, or change slowly, may not be right. I give it an, oh, say 25% chance of outright singularity, not necessarily from the first machine launched, and within 10-60 years (not a standard distribution). Those odds and that timescale make it worth worrying about.

Anonymous (2009-10-09 21:10):
Hmm, I was reading this in the hope of finding an intelligent and articulate person taking a different point of view from EY. Unfortunately RK does quite the opposite, albeit inadvertently, one would assume. Or maybe not?
It just might be reverse psychology, big time.

YouWillJustCallMeAFanboy (2009-06-07 19:24):
Richard, didn't you say you were going to change the world with something? Like, whatever happened to that?

P.S. I love how you worked a nuclear power plant into your argument... predictable.

Michael Bishop (2009-06-01 11:56):
taw was polite, reasoned, and articulate. You may choose not to respond, but it does not reflect highly upon you or your blog.

Richard Kulisz (2009-01-10 09:13):
Oh wow, a response by a Yudkowsky fanboy. Will my heart ever settle down from its pitter-patter?

It's a tedious defense of Yudkowsky and a tedious attack on me. Wrong in all sorts of tedious ways.

Not worth responding to. Just barely worth the effort of a put-down.

taw (2009-01-10 03:24):
I don't think you even understand what Eliezer is saying.
Not that I'm blaming you, as he wrote over 9000 posts with snippets of it, and there's no one place with his whole argument.

The first thing you and EY disagree on: you think humans have massive differences in intelligence, and Eliezer thinks they're really tiny and insignificant on a cosmic scale (http://www.overcomingbias.com/2008/05/my-childhood-ro.html). You list this as one of your assumptions, but to Eliezer it's about as wrong as arguing for creationism. This seems to be the main point of your argument, and I really don't see how anybody could assume what you're assuming.

As for keeping the AI in the box, Eliezer did run some experiments, and according to what he says, it seems that even without vastly transhuman intelligence it's not that hard to get out of the box (http://yudkowsky.net/singularity/aibox).

As for isolated groups vs. massive cooperation, well, that sounds like a weak part of Eliezer's argument. EY and Robin Hanson discussed it over and over again on Overcoming Bias (http://www.overcomingbias.com/2008/12/what-core-argument.html).

As for the "AI goes FOOM" part, Eliezer claims that the only realistic hope for AI is building a crude prototype that can rewrite itself, and if it can rewrite itself, it can become really powerful really quickly. Nothing ludicrous about that: the very first cell gave birth to all life, the very first intelligent being to evolve on Earth ended up running the planet, the very first mid-sized computer network became the Internet, etc. Things that start crude but can adapt can go way beyond their initial state. You can argue about whether it's true of AI, but there's nothing ridiculous about it.

Eliezer's thinking that we need to care about Friendly AI now is based mostly on his belief that there's a chance it will grow very quickly from a seed state.
This might not happen, but if you agree with his assumption that it's possible, then his fear becomes really obvious.

tl;dr version: you rant about something you never even bothered to read about.

Richard Kulisz (2008-10-10 21:21):
Spolsky didn't invent the 'leaky abstraction' and he should never get credit for it. We are using the term in the same way -- it's not me that's copying him. Furthermore, Spolsky isn't viewed too highly by experienced programmers, and he sucks from a designer's perspective also. Reading him is the equivalent of reading romance novels.

It doesn't matter whether you call it conditioning or programming. At a fundamental level, all a computer does is switch voltages. AI software doesn't do logic because it's in the hardware; it does it because it's programmed to. And whether you call it programming or conditioning makes no difference.

Diverting into association for a moment: you should know that association gives rise to the subconscious, and the subconscious is responsible for creativity -- the piddling little of it that nearly all people possess, and the great amount that rare people possess. Without association there is no creativity, so the subconscious, which you may see as bad, is an unavoidable cost of not being autistic.

IOW, your purely logical AI would be autistic. And almost certainly non-functional. It's not just that a purely logical AI sounds like autistic people. It's /exactly like/ autistic people.

I believe AI judgement would work in exactly the same way as human judgement.
That is, it requires both creativity and logic, and isn't possible when either is absent.

Joe (2008-10-10 16:46):
Well, I think what I was thinking about was trying to conceptualize what intelligence looks like from the inside (i.e., what it would be like to be this particular intelligence) without having the shortcomings of our evolved neurological structures.

I'm not a comp-sci or AI person, so I wasn't necessarily thinking about it in those terms, and it may be that you conceptualize your own thought processes in a way that is significantly different from my own. But I think the speculation I was having was that it's difficult to conceptualize a "pure" logical consciousness through the haze of our own conceptual mechanisms, which are rooted in our biology.

As you say, your own biology may not get in the way of your thinking in purely logical terms (if you're using leaky abstractions in the way that Spolsky does), but as indicated in the depression discussion, it definitely has implications for our qualitative judgement mechanisms. Which may be one of the leaks you're talking about.

Like I said, I'm not an AI guy, and this stuff might be discussed in that literature, but I find it fascinating to think about whether a consciousness founded on logical principles would develop a system for qualitative judgements as a result of its own evolutionary journey through selective pressures.

I've always seen logic as an overlay on top of our neurological structures, i.e.
something that we learn and, in some ways, condition ourselves into, but I think it's a different thing to have a consciousness that doesn't compute that way as a learned function secondary to its hardware, but as a fundamental property of its hardware. I don't have any concrete reason to believe it will be any different; it's just idle wondering on my part.

Richard Kulisz (2008-10-10 13:11):
Agreed. A necessary precondition for any kind of rational plan is to get away from a planet on the verge of self-annihilation. The asteroid belt provides enough resources to do almost anything you want. And an AI's needs in terms of life-support are pretty minimal -- basically, radiation shielding. The most dangerous thing an AI is likely to do is launch a nuclear-powered Orion starship from the surface of the Earth. And even that isn't significantly dangerous.

As for learning, what makes you think the fundamental basis of computation will have any impact on the higher-level abstraction of an intelligent mind? Maybe there'll be abstraction leakage, maybe not. After all, while most humans are magical thinkers (association, opposition, essentialism), a big minority are logical thinkers (implication, contradiction, structure). Looking at my own mind, I can say there are far bigger abstraction leaks of its neuroanatomy than associative learning.
So much so that the peculiarities of associative learning aren't really visible.

Joe (2008-10-10 11:16):
I've always found this field interesting, and always thought that science fiction provided unsatisfying speculation about what non-human consciousness would be like. Our tendency to anthropomorphize is just overwhelming. But I think we also tend to find things that are extremely alien uninteresting.

It is fascinating to think about what the fundamental principles of intelligence might be, and how machines wouldn't necessarily be bound to associative reasoning or any other artifacts of the biological processing that we have.

But it's hard to argue against the idea that the intelligent machines that come to exist will be those that have some imperative to exist, whether evolved or programmed. I've always speculated that machines, if bounded by pure logic, would ultimately just decide that existence wasn't really all that desirable a state to be in and shut down. And those that felt an imperative for self-preservation would get away from us as soon as possible.

Richard Kulisz (2008-09-30 14:52):
I was thinking of a passage where he seemed to imply that some other guy was way smarter than him. But no matter; it's not an important issue. Neither is Eliezer's intelligence, except insofar as he discredits himself further every time he brings up his ability to memorize. Because that, and scratch memory size, is all intelligence is.

Likewise, the torture vs. dust specks.
The only reason it's important is that it provides a reductio ad absurdum of utilitarianism, thus opening up morality to inalienable rights, which are based on transfinite numbers instead of finite numbers. IOW, the only reason it's important is something Eliezer never mentioned. I also have serious doubts that Eliezer is on the right side of the fence on this issue.

On the subject of FAI, I take a more nuanced view. You see, systems such as CYC are already abject slaves to humans: their two directives are to seek out internal contradictions and to answer any questions posed by their handlers. So it can't be that engineering friendliness is a form of slavery, since we already have slavery by default.

My problem with FAI is with the notion of an AI being friendly with the human species as-is. Especially with all the idiots in the human species -- idiots among whose ranks Eliezer now holds an honorary position. The lack of self-awareness involved in what he's doing is pretty pathetic.

pswoo (2008-09-30 11:54):
He is humble, I believe, in a kind of fashion where he tries like hell to impress his sheer intelligence upon people, while still being humble in the sense that he's always acknowledging that he's dumber than his Bayesian superintelligence.

"When Marcello Herreshoff had known me for long enough, I asked him if he knew of anyone who struck him as substantially more *natively intelligent* than myself. Marcello thought for a moment and [said no]... Not what I wanted to hear."

"*You're still definitely the person who strikes me as inhumanly genius - above all else.*"

"*Wait wait wait wait. Eliezer... are you saying that you DON'T know everything????
~runs off and weeps in a corner in a fetal position~*"

"*...aspiring to your level (though I may not reach it) has probably been the biggest motivator for me to practice the art.*"

"*Up to now there never seemed to be a reason to say this, but now that there is: Eliezer Yudkowsky, afaict you're the most intelligent person I know.*"

... it goes on for a while.

I don't believe he set out to create a cult. Maybe he just set out to write his thoughts down in a blog, but many of his readers responded in a way that was... unwarranted. It's a little embarrassing to read.

Well, I'm exaggerating a bit -- quite a bit, perhaps (although those comments in italics are verbatim quotes). *Cult* is too strong a word. There are voices of dissent. But he's got many of his readers thinking a certain way about his sheer authority on the matters of Bayes, AI, and intelligence, so when he proposes something utterly ridiculous like Torture vs. Very-small-irritation-of-the-eye (http://www.overcomingbias.com/2007/10/torture-vs-dust.html), people think that this is an important question worthy of much discussion rather than one deserving outright dismissal. Likewise for his promises of immortality.

Friendly AI is essentially his attempt to make a design for an AI which would not only keep humans alive, but make them immortal and share with them (essentially an AI that would be a slave to human overlords). I think, aside from your point that FAI is a knee-jerk fear response, there's something fundamentally wrong with the premise of this work: the objective is outrageously selfish.
He's humble in some regards, sure -- but his life's work stinks of human hubris.

Richard Kulisz (2008-09-30 07:26):
Other than me, there aren't any counter-examples, and that's really quite remarkable. And I don't count, because *of course* I would seek to be a counter-example to a universal law.

As for Eliezer, well, from what you say it's worse than I ever imagined. I've seen Eliezer be (or seem to be) humble, so I'm not sure how serious he is about that cult crap. But it doesn't matter, since my response to worship is to destroy my followers. And by that standard, Eliezer takes being a cult leader entirely too seriously.

Collect friends, destroy followers: that seems to me to be a requirement for psychological health and well-being. In any hierarchy, only one guy can be at the top, and that's why power over others can never be moralizeable.

That's actually why I favour the extermination option. Because the only alternative is to make a hierarchy of citizenship, with first-, second-, and third-class citizens. And I don't like it.

pswoo (2008-09-29 18:51):
"there are still more orders of magnitude difference between an average human and a top human. And somehow they've managed to coexist without trying to annihilate each other."

I won't go looking for actual counter-examples at the moment -- but I found this funny.
In a way, *you* are the counter-example -- or you want to be.

pswoo (2008-09-29 18:44):
There's a bit of Eliezer's writing, and reasoning, which is very well done -- I like his short fiction. A lot of the other stuff is waayy too thick, bordering on incomprehensible. He'll write a five-post series (on Overcoming Bias) where a single short essay-type post would suffice.

He has a bit of a cult following too, if you ever bother to read the comments on that blog. And I do mean *cult*. He proclaims that existential doom is inevitable should his cause fail; speaks archaically of the Way, the Void, and other things which are "thus written"; and, as you mentioned, is obsessed with a benevolent deity which promises immortality for all. Oh, and he happened to write a few cryptic posts about how it is irrational to ask the question "Am I in a cult?"

Well, I'm not saying that OB is a cult. I'm just saying that 80% of the posts are made by Eliezer, a man who will openly ask his readers: "Have you ever met anyone as smart as me?" But the readership couldn't be biased towards Eliezer's preaching... what they're doing there is *overcoming* bias.