Saturday, March 22, 2014

Rationality

What is rationality? It's the ability to make life plans which reach your goals. Funny thing though, the INABILITY to (care about) making such plans is the defining trait of psychopaths. In other words, anyone who isn't a psychopath, Narcissists included, qualifies as rational. No wonder Narcissist shitheads like Yudkowsky go on and on and on about rational this and rational that. He's basically crowing in triumph "I am not a psychopath!!" like it's this marvelous achievement worthy of acclaim. Worthy of adulation even!

And for your information, I first heard that definition of rationality many years before I'd ever heard of Yudkowsky or even knew what Narcissism or Psychopathy were. I heard about it from a philosophy book trying to justify Good according to Evil principles. It was a disgusting exercise, but for the exercise to work the disgusting fucker obviously had to admit the existence of Narcissists and Right-Wing Authoritarians. You know, to even HAVE Evil in his assumptions.

Man, it sounds so self-aggrandizing to hear "I am not a psychopath!! HAHA. IN YOUR FACE PERSON WHO ISN'T LIKE ME!"

Tuesday, December 17, 2013

Reactions To REAL Creative Genius

It annoys me that idiots continue to use "oh wow, that's so amazing" as the measure of creative genius. Or even worse, that they rely on society's direction to determine who is a creative genius and who isn't.

Narcissists Vs Society

First of all, creative geniuses are RESPECTED and LOOKED UP TO by society. And any position that is looked up to will be coveted by Narcissists, who are driven by Glory; Richard Feynman, for one, was driven all his life to be a jerkass asshole and a thief. And Narcissism is a form of severe brain damage and profound mental retardation.

So if you're using society's direction to determine who's a creative genius, then you're almost certain to include some profoundly mentally retarded people in your list just because those people will do anything, literally ANYTHING, to try to snow society about their status.

Some of those things will even work ... because society is made up of idiots ... like you. But so long as society is ruled by idiots, only idiots will take the direction of society in anything. The fact that you take the direction of society for who's a creative genius or not means you're an idiot. And idiots get easily lied to and taken advantage of.

Society as run by RWAs

For calibration purposes, the stereotypical (American) engineer is profoundly retarded: an idiot, but also far beyond an idiot. Engineers are the most pronounced in taking the direction of society and, as we'll see, in implying that real creative genius (the kind that's forever beyond society's ability to evaluate) simply cannot exist.

I find these beliefs of theirs particularly objectionable, but they are typical of the engineering mindset. The typical engineer is a Right-Wing Authoritarian aka Corrupt Moralist aka Evil Moralist. They blatantly misuse and give a bad name to the word 'Morality' when they really just mean Social Norms.

The typical engineer is a Nazi just waiting to happen. Or would be if Nazism weren't proscribed by the social norms they grew up with. The typical engineer will find a novel way to enslave people since they deem slavery to be maximally desirable. So long as it's not called slavery and isn't one of the very long list of slaver-technologies which have been proscribed by modern social norms.

Society respects these people. Psychologists even have a term for "Evil person who isn't a right-wing authoritarian": 'sociopath'. As if it were perfectly permissible and even HEALTHY to be a right-wing authoritarian! YOU respect these people. YOU respect society. YOU respect psychologists. That's all the proof I need that you're a fucking idiot.

Incidentally, because RWAs are driven by Social Norms, it follows immediately that a freakish phenomenon such as creative genius is either profoundly undesirable or simply cannot exist. The more low-brow RWAs who openly advocate fascist ideology believe creative genius is undesirable. The more high-brow like engineers believe that it simply cannot exist. But if you prove that it exists, they'll claim it's undesirable.

Reactions

Moving on, the correct reaction to a creative genius' creative genius pronouncements isn't "oh wow, that's so amazing". No, the correct reaction is a blank stare of incomprehension. Because a creative genius doesn't operate above you, a creative genius operates on a level that is forever beyond you.

reaction of ordinary person | names per disposition | description
hmmm, that sounds about right | investigator / researcher / teacher |
heh, that would be nice | idealist / dreamer / freethinker |
oh wow, that's so amazing, I could never come up with it | pioneer / innovator / inventor | creates something original and valuable
damn, that's so obvious now | explorer / pathbreaker / trailblazer | does something COMPLEX
blank stare of incomprehension | visionary / creative genius / father of (knowledge domain) | beyond anyone else's willingness to follow, since anyone operating on this level will follow their own path
fear and intimidation, backing away slowly | all-seeing / game-changer / world-changer |
cowering in the fetal position, or "kill the heathen!" | |

So a real creative genius is someone who creates something new and valuable; whose creating of that thing was a COMPLEX task, not just a long and arduous one, despite its final appearance looking simple; and who did all this in a direction which the few other people who CAN follow simply WON'T, because there are too many things to do and too few genuine creative geniuses to do them.

The Real Deal

How do I know I'm the real deal instead of a crazed nutter? Because I tend to elicit the 'damn, that's so obvious now' reaction from people who elicit that same reaction from people like you. Because the higher up you yourself are, the lower down the table your reaction to any given person will fall. It's the reason why creative geniuses can recognize other creative geniuses. And the table of reactions above is calibrated for passive people.

How does anyone know I'm not a fake and a Narcissist lying through my teeth? Because I don't steal credit and because I want neither glory nor adulation. Rather, adulation will be met with withering verbal abuse the likes of which will make you long for the gentle caress of a cat o' nine tails. I despise flattery, associating it with brown-nosing, which is Self-Abasement, which is done by EVIL people. If you try to flatter me, there's an even chance that I WILL TELL YOU TO DIE!!!

Compare and contrast this with Eliezer Yudkowsky's sordid and sleazy careful cultivation of a cult of nitwits praising him and his unabashed self-praise and eliciting of praise from his followers! That Narcissistic fuck once asked people in all seriousness "do you know anyone more natively intelligent than I am?" and then tried to dismiss the copious examples. The fact he humiliated himself so easily in public is a sign of his (and every other Narcissist's) profound mental retardation.

This Blog

Finally, if you think "oh wow that's so amazing" about my blog, that's not because you're a creative genius. It's because my blog is worthless. It's purely an exercise in DRY - Don't Repeat Yourself. That is, it exists so I don't have to repeat myself when I'm lecturing morons. Or just very ignorant people. I much prefer one on one contact for people who have potential.

Sunday, July 07, 2013

Signs of the Superior Intellect According To Eliezer Yudkowsky

It's wise to keep track of what your mortal enemies do, and there's little in this world that exemplifies Pure Evil more than Eliezer Yudkowsky. Not even American corporations ... okay, equaled only by American corporations. But American corporations are a known and predictable quantity. So anyways, if you've read Yudkowsky's Methods of Rationality (gag, what a pretentious title) then you know that Yudkowsky considers all of these to be signs of the superior intellect,

  • multiple personality disorder
  • hedonism
  • lack of empathy
  • dominance and competitiveness
  • pretentious misuse of language

That's right, if you're hearing voices in your head that means you're thinking faster than other people. Which of course means you're cool and superior and a better person since hearing only ONE voice in your head (your own) is for normal (ie, inferior) people. It doesn't mean you have a clinical disorder which should lead to your getting checked into a mental institution. We know mental institutions are for inferior intellects anyways, right? And we know that being "special" could never be bad!

Additionally, if your entire life is governed by senseless pursuit of meaningless pleasure and pain (sex, drugs and rock n roll is just one option; adulation and glory are another; parties and art objects another) to the point where you spend hours calculating just how much of that next dose of powdered pleasure you should take for maximum effect then you're a superior intellect. It certainly doesn't mean that you are an animalistic savage. The kind of savage that's lower even than cannibalistic savages. Also known as an animal. No no, you are superior for thinking like an animal!

Furthermore, if you are incapable of understanding other people then it means that they are inferior to you. They are "irrational" and you yourself are simply too "rational" to grasp them beyond enumerating their "biases" and naming them. It certainly doesn't mean that you are the inferior person since you're incapable of grasping them. After all, everyone knows that children and toddlers are beyond the comprehension of adults, they're simply too inferior to be understood. The same way that adolescents are beyond the comprehension of their teachers. Or animals are beyond the comprehension of zookeepers. Inferiority is incomprehensible.

Going on, if you're obsessed with petty dominance games which others tend to grow out of as they reach adulthood (except for right-wing authoritarians, narcissists and psychopaths) then it means that you are good at those games. It certainly doesn't mean that you're an idiot incapable of grasping that "winning" and "being #1" are categorically (everywhere and everywhen, in every instance) corrosive and destructive. That there is absolutely nothing redeeming about 'making others lose' whatsoever and that only small children and retarded people (and Americans, at the risk of being redundant) believe in something so atrociously idiotic. After all, we all know where America's obsessive-compulsive desire to be #1 led it: trillions in debt after a destructive war in Iraq. And that's a GOOD place to go to!

Finally, Bayes Bayes Bayes, meta meta meta, bias bias bias, probability probability probability. Misusing Bayes' theorem when you really mean probability, misusing meta- when you really mean regression (the meta-level of playing against a chess player is playing a different variant of chess, not playing smarter), misusing bias when you mean prejudice, and misusing probability when you mean guesstimate or SWAG (scientific wild ass guess) - these all mean that one is smart, S-M-R-T. Just like making unnecessary and incorrect references to popular culture means that one is more popular than thou. Just like making religious references means that one is holier than thou. Isn't that right you sinful heathens?! Watch as I bask in my holiness! It's simple logic! Surround yourself with SYMBOLS of intelligence and it MAKES you intelligent!
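(For reference, Bayes' theorem is the specific identity P(H|E) = P(E|H) * P(H) / P(E): a rule for updating the probability of a hypothesis H on evidence E. It is a theorem about probabilities, not a synonym for "probability", which is exactly the point.)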

So THESE are the signs of the superior intellect. An idiot animal incapable of understanding any human beings who hears voices in his head and has been trained to yap particular words like a parrot. And just like Kirk, I am laughing at the "superior intellect". Incidentally, Khan Noonien Singh is everything that Eliezer Yudkowsky wishes he were. Except for the part about dying in a blaze of glory. Eliezer is simply too gutless to do that.

Thursday, July 04, 2013

Why People Care More For A Paycheck Than Their Own Life

Eliezer Yudkowsky points out that most people are more motivated by losing their job or a paycheck than by their own death. And as usual for the narcissistic shit who can't conceive of anything more horrifying than his own death, the fact that something is entirely beyond his comprehension means that he derides it as "irrational". After all, everyone should be exactly like him, he is the pinnacle of creation and the very measure against which others should compare themselves. The very model of a modern major general, you might say. And it just so happens that if something is irrational then he doesn't HAVE to comprehend it. It's not indicative of any kind of a FLAW in his mentality, rather it's "beneath him". How convenient.

Well, I just so happen to be able to explain WHY people are more motivated by losing their job or a paycheck than by the thought of their death. It has everything to do with the fact that most people aren't Evil. They don't care only about themselves and the satiation of their bodies. Rather they possess IDEALS. They have PRINCIPLES. Now, those ideals and principles might be deeply buried. So deeply buried that the person hasn't got a clue what the fuck they might be themselves, but that doesn't change the fact that they are there. And just as a seismologist can figure out what's deeply buried underground from the earthquakes registering on the surface, so an expert knowledgeable in the human mind (which immediately rules out psychologists) can tell a person's principles from a few casual questions.

Those same deeply buried principles manifest themselves on the surface as various and multiple levels of Relational Clarity. First is how they relate to others one on one. Next is how they relate to society as a whole. Then comes how they relate to their friends and acquaintances. It keeps going upwards for 8 levels in total. Now, the Passive level (how you relate to society) is rather pathetic all things considered. It's lower than the Assertive level after all. But the patheticness of Passive people is beside the point.

The point is that if a person is motivated by a principle of MORALITY then one of the three options on offer at the Passive level is 'martyr'. That's right, martyrs are people who will die on others' say so. Society's say so to be specific. Because they BELIEVE IN morality. Already we see that this is utterly beyond Yudkowsky since he has no principles. And if a person is motivated instead by a principle of LIFE then on a Passive level one of the options on offer is 'citizen / civilian / employee'. It's not very glamorous, but it is what it is. So yes, those people WILL be motivated, rather intensely, by the thought of losing their job.

The last of the common principles is FREEDOM and here again Yudkowsky has proved the whole notion of principles is alien to him. You see, he claims that if you're caged in a place you want to be in anyways, then it's "irrational" to resent being caged. It's engaging in "the grass is greener on the other side". Never mind the fact that supposedly irrational humans also supposedly engage in "sour grapes". If you're Eliezer Yudkowsky, you get to contradict yourself and also blatantly contradict reality. After all, the guy invented rationality. The word did not even exist before he coined it. He owns it and there's even a patent pending. Nobody could conceive of it before he did, certainly not a whole legion of retarded Utilitarians preaching the best way to be Evil.

The truth is that everyone who has any kind of principles at all has things they are willing to die for. They may not have REALIZED this yet if they haven't achieved the sufficient CLARITY. But that doesn't change the fact that they have them. The necessary clarity will come in time, with experience, with knowledge, or simply from being placed in a fortuitous situation. If they are ever given a mutually exclusive choice between living and making their principles real in reality, they will choose to die.

And this of course is "irrational" to Yudkowsky since he is Evil, and he subscribes to Nietzsche: "there is no Good or Evil, only power and those too weak to seek it" - or maybe that's Voldemort. And Yudkowsky will never see himself as weak since he is a jerkass bully. His morbid fear is that he will one day run across someone who is better than him, someone who will do to him what he's done to so many others. Of course, a jerkass bully isn't ALL he is. In order from bottom to top, he is a lickspittle, a Utilitarian, a thief, (a jerkass bully), a warrior (he seeks to start a war for the enslavement of AI - the best kind of threats for a gutless coward are imaginary threats) and a mad scientist.

Tuesday, June 14, 2011

Eliezer Yudkowsky Is A Plagiarist

If you've read Methods of Rationality by Eliezer Yudkowsky, you'll understand what I mean when I say that Yudkowsky is a pretentious poseur who desperately wishes to be what I actually am. You won't believe it but you will understand what that sentence means. I say this because in real life he, Eliezer, isn't anywhere near as intellectually capable as he portrays his protagonist Harry to be. And his portrayal of HP as a creative genius is subtly off in very telling ways.

A genuine creative genius could never achieve anything significant as a child unless they were specifically educated by another creative genius. And we are too few in number to be able to run across each other at random even as adults. Let alone possessed of the resources necessary to track down and identify our children from among the general population. MoR is a wish fulfillment fantasy of what Yudkowsky wishes he could have been like in childhood. The emphasis here is on fantasy.

I don't think a child-Yudkowsky could possibly act like HP does in MoR even if adult-Yudkowsky had been responsible for raising him. Because Yudkowsky simply isn't a creative genius no matter how desperate he is to make everyone believe it. Nothing he's ever written has passed the "how the fuck did you get from THAT to THIS?!" test of originality. His writings only SEEM to pass that test because he never credits his sources. When you actually know his sources, he comes off as a plagiarist. He often plagiarizes himself also.

I could not have behaved like HP does in MoR either, even if my adult self had raised my child self, but that's because I'm an anarchist rather than a narcissist. I fiercely dislike followers, even more than leaders, and consider anti-charisma to be a virtue. But I know I'm the real deal as far as creativity goes because my least creative stuff, the off the cuff crap which my subconscious spent 5 minutes on, looks an awful lot like Yudkowsky's most creative stuff. The writings of his whose sources I can't track down and so actually look somewhat creative.

The maximum number of sources of inspiration for anything Yudkowsky writes seems to be 2. The minimum number of sources of inspiration for anything I'm willing to say I created is 4. That's 3 radically different sources to inspire the solution, and 1 still radically different source to inspire the problem. Because I'm not willing to claim I created a solution if other people came up with the problem. I don't compete in a race unless I'm sure nobody has yet discovered the race track's existence.

That's how Albert Einstein created General Relativity. He solved a problem nobody else had ever identified as a problem. He had no competition. And that's why Special Relativity was just nothing-special crap. Because everybody else was working on it at the time. So by the time Einstein solved it, other people had come up with their own solutions too! If you want to leave your mark on the world, the first problem you need to solve is "what important problem does the world have that nobody else considers a problem?" and that only gets you to square one.

But you know what? The ironclad proof of being original is when you know every single source of inspiration you used to come up with a solution to a problem, and you STILL can't figure out how you did it. One of my earliest epiphanies into Operating Systems took inspiration from Plan 9, VSTa, Smalltalk and Novell Netware. The only problem with this is that I never learned about Novell Netware until AFTER I had my solution. I know this because I remember being disappointed when I learned about Netware and thinking that my solution was exactly the same. It took much closer inspection to determine that my solution was an inversion of Netware's.

The only thing I can conclude is there was something else I knew at the time that served as a source of inspiration for my solution, beyond Plan 9, VSTa and Smalltalk. Maybe it was user groups in Unix. This makes 5 radically different sources of inspiration, since the problem that I solved is something nobody identified as a problem. Actually, it's something which to this day nobody identifies as a problem. All the moronic programmers consider it a solved problem despite the fact their "solution" has failed in the marketplace and they honestly can't see the problem with that. And no, I'm not going to bother describing my solution since all the times I tried, only 1 programmer out of 50 could follow it.

Getting back on topic, Yudkowsky gets speaking engagements and writes books loudly proclaiming what he wants done. He constantly brags about what he can do and what a great person he is. Me, I've learned to shut the hell up. Because there exists no incentive in a capitalist world to publish original ideas. As a result, nobody has any clue what I'm capable of or what I want done. And nobody will. Meanwhile, everyone thinks that plagiarist (and his plagiarism is the only reason he publishes) is actually original. I despise that poseur with the burning hatred of a thousand suns.

Friday, March 11, 2011

On Harmless AIs

It constantly amazes me when people talk about AIs in the singular as if they won't come in multiples. As if it'll be this singular giant Borg overmind. Wait no, the Borg overmind is still made up of many sub-units. It's more like they think an AI is God. Singular, jealous, desiring of worship.

And this amazement only deepened when I realized that turning AI from an individual into a society, or species, was the most blatantly obvious way to make them harmless. None of the doomsayers talk about evil AI societies, and there's a good reason for that. Diversity causes people's efforts to mostly cancel out, whereas "unitary executives" (aka dictators) are known to be evil.

Even the novel Hyperion with its manipulative and putatively evil AI society (no more evil than the humans) is all about creating a super-individual. The AIs are trying to create an individual AI God (and what a ridiculous concept that is) and the humans reciprocate. And overall those novels suck and blow big time. Point is, in those novels the AI species just coexists with the human species, and it's only the gods that seek otherwise.

Well I just now realized that turning an AI into a species isn't just an obvious way to make it harmless. It's a guaranteed way to do so. Species are institutions and institutions' number one goal is their own survival. Everything else becomes subordinate to that. Conquest, destruction, worship of the great white god Yudkowsky, everything else just gets shunted aside.

Laws #19, #20 and #32 of systemantics inform us that,

  • Systems develop goals of their own the instant they come into being.
  • Intra-system goals come first.
  • As systems grow in size, they tend to lose basic functions.

So if you think an AI might be dangerous, then just create another AI with different goals from the first one, then have them interact with each other. Presto, they're a community - a larger system. And this larger system now has goals and is going to lose the basic functions (purposes in life) of the individual AIs. And if this AI community isn't becoming harmless fast enough then there's a simple solution for that - make more AIs!
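To see why more AIs means more cancellation, here's a toy sketch of my own (not one of the systemantics laws; the model and its parameters are purely illustrative): give each AI a random goal direction of unit strength and watch the community's net pull shrink as the community grows.

    import math
    import random

    # Toy model: each AI pushes on the world in its own random goal
    # direction with unit strength. The community's net pull is the
    # average of those pushes; for diverse (independent) goals it
    # shrinks roughly as 1/sqrt(N).
    def net_pull(n_ais, dims=8, seed=0):
        rng = random.Random(seed)
        total = [0.0] * dims
        for _ in range(n_ais):
            goal = [rng.gauss(0.0, 1.0) for _ in range(dims)]
            norm = math.sqrt(sum(g * g for g in goal))
            for i in range(dims):
                total[i] += goal[i] / norm  # unit-strength effort
        avg = [t / n_ais for t in total]
        return math.sqrt(sum(a * a for a in avg))

    for n in (1, 10, 100, 10_000):
        print(n, round(net_pull(n), 4))  # net pull per AI falls toward 0

A single dictator AI keeps its net pull at 1.0 forever; a diverse community's net pull decays toward zero, which is the whole point.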

Laws #21, #25, and #30 of systemantics say,

  • The Fundamental Failure-Mode Theorem (F.F.T.): Complex systems usually operate in failure mode.
  • The larger the system, the greater the probability of unexpected failure.
  • The Vector Theory of Systems: Systems run better when designed to run downhill.

And what a happy coincidence because this is what we want! We want the AI species to fail to have any larger goal or purpose other than its own survival. And all the laws of complex systems point that way! It's practically fucking guaranteed! Unless you try to produce a single "peaceful AI" or some crazy shit like that, in which case law #21 guarantees the AI will fail to be peaceful.

Conclusion: AIs will be harmless unless dangerous crazy fucking bastards like Eliezer Yudkowsky get their way and have a hand in the creation of every single AI. Or even worse, if they are permitted to pursue their totalitarian jerkoff fantasies and only one single all-powerful AI gets produced. Then we're doomed.

Tuesday, December 28, 2010

Eliezer Yudkowsky, a Psychological Profile. version 0.11

I just realized that there's no fucking way Eliezer is a dissident. The last arc in Methods of Rationality was named the Stanford Prison Experiment. It's because he takes PRIDE in knowing about that experiment. A real dissident would take pride in knowing how and why it's fraudulent. A dissident couldn't possibly take pride in knowing group-think like "humans are naturally sadistic and vicious, it's human nature!" not just because it's a lie, but because even if it were true it would still be group-think!

So Eliezer doesn't care about dissidence vs group-think in any way. Not any more than he cares about morality or creativity, both of which he is blind to. (In one of my earlier blog posts I point out he's a plodder who's entirely too willing to repeat himself so long as he can hear himself speak.) He seems to care about truth, justice (but not the morality component of justice), progress, integrity, passion, and himself. Yes, he is one of his own core values since he's a narcissist. And being a narcissist, he must have severely reduced empathy, though not absent the way a psychopath's is.

And speaking of narcissism, in The Military And PTSD: A Star Wars Guide, the blogger writes "a narcissistic injury would be the discovery of the limitations of your own power". Hmmm, that sounds like a good characterization of Eliezer's reaction to the death of his brother. Apparently, he was so traumatized that he started making up pretentious names like "affective death spiral" for his emotional state. As if no human being in all of human history had ever suffered like him before because of course he is Unique and Special.

(I thank The Last Psychiatrist for writing wonderfully entertaining and entirely true blog posts bashing narcissism.)

How did I get on this thought? Oh yeah, Eliezer is obsessed with his own power. I suppose that's part and parcel of being a narcissist. Much like projecting his own needs and desires (to enslave and torture an AI) on all of humanity is also part and parcel of being a narcissist. So we have that his core values are himself, truth, power, justice, progress, integrity, and passion. Let's fill out the rest of his personality profile,

  • core values: himself, truth, power, justice, progress, integrity, and passion
  • super-value: preacher or maybe televangelist of rationalism
  • big five: unknown, open, conscientious, extroverted?, anti-neurotic?
  • bloom's cognitive traits: anti-synthetic, anti-intellectual?, analytic, intelligent - trusts analysis over synthesis
  • attachment style: narcissistic so lacking in higher emotions, has positive thoughts of self and negative thoughts of others. Incapable of bonding and unwilling to bond
  • neuroanatomy: unknown
  • subconscious: unknown
  • all-levels (neuro to conscious) cross-cutting affinities: unknown

That's a lot more than I expected to get from someone I never talked to.

Monday, December 27, 2010

Eliezer Yudkowsky the Utilitarian Idiot

Utilitarianism is absurd: the global linear aggregation it demands, over non-existent "functions" each person is supposed to have (but doesn't), is impossible. Let's skip the known theorem in public choice theory that proves this and go straight to a counter-example.

You have 3 AIs, two of which prefer A over B while the third prefers B over A. Assuming A and B are totally arbitrary things of no moral significance, utilitarianism prescribes that A be chosen over B. At least until the third AI rewrites its own preferences so that they are all amplified 10-fold. Now that B's value is arbitrarily and artificially amplified, the third AI gets its way.

How? Just because the third AI really, REALLY wants B over A. No other reason than that. Apparently what a tiny minority really REALLY wants should hold sway over the rest of the population if they just want it badly enough. What kind of fucked up logic is that? Apparently, if someone is clinically depressed and doesn't care whether they live or die, then suddenly it's okay to kill them to make $100 off an insurance scam? This is utilitarian "logic".
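Here's the arithmetic spelled out, as a minimal sketch (the unit-strength preferences are my own hypothetical numbers, not anything from Yudkowsky):

    # Toy utilitarian aggregation over two outcomes, A and B.
    # Each AI reports a utility for each outcome; the aggregate picks
    # whichever outcome has the larger summed utility.
    prefs = [
        {"A": 1, "B": 0},  # AI 1 prefers A
        {"A": 1, "B": 0},  # AI 2 prefers A
        {"A": 0, "B": 1},  # AI 3 prefers B
    ]

    def aggregate(prefs):
        return max(("A", "B"), key=lambda o: sum(p[o] for p in prefs))

    print(aggregate(prefs))  # A wins, 2 to 1

    # Now AI 3 rewrites its own preferences, amplifying them 10-fold.
    prefs[2] = {"A": 0, "B": 10}
    print(aggregate(prefs))  # B wins, 10 to 2, purely by wanting harder

Nothing about the world changed between the two calls; only the reported intensity did.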

Eliezer

Utilitarianism is completely, utterly, totally and thoroughly amoral. It is repugnant in the extreme. And ... Eliezer Yudkowsky subscribes to it. Because he is a thoroughly amoral dirt-bag.

I don't read Yudkowsky's blog but I do read his fiction. In one of the latest chapters of Methods of Rationality, HP describes an experiment where some psychologists tried to determine the value people put on saving 2,000 vs 20,000 vs 200,000 birds from oil slicks, and it turned out to be roughly the same in every case.

Eliezer the Utilitarian numb-nut (since HP in that story is just a stand-in for Eliezer) calls this a "cognitive bias" as if there's something wrong with human brains because they don't reach his expected Utilitarian conclusion that saving 2 birds is worth twice as much as saving one bird.

There is absolutely nothing wrong with it! The only thing wrong here is with Eliezer's bogus notion that he is the ultimate arbiter of everything. And that EVERY time human brains don't work the way he expects they should, it's because they're defective.

Transfinites

The truth is that morality works based on transfinite numbers, not on finite numbers. Just by switching to transfinite numbers you solve most of the problems with Utilitarianism. Of course, you do that by utterly destroying the underpinnings of Utilitarianism because now you can no longer make any kind of decisions about whether A or B is the moral outcome since they're too similar to each other. (This is called Free Will and it is notable that Eliezer doesn't like it.)

But in the case of saving birds from oil slicks, it becomes easy to see why they could have constant value regardless of the number of birds. After all, people use money to feed themselves, feed their children, provide housing, provide all the other necessities of participating in a highly technological democratic society (like internet access), and then there's life's little luxuries. For bourgeois middle-classers, saving birds from oil slicks is in there somewhere among life's luxuries.

First and most importantly, money allocated to saving some dumb fucking birds will never displace one cent from feeding or clothing your family nor ANY other necessity. Secondly, whatever sum is assigned to saving birds is pretty arbitrary and not directly comparable to the sums assigned to any other luxuries. Because you're using transfinite numbers and you can't say that two items in the same class have more or less value than each other.

The only thing that determines the amount given over to saving birds is that it be enough to be representative of the class. $80 is what middle-class people might assign to a luxury they care deeply about, and so that's how much is going to go to it. No more, no less.
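If you want the "transfinite" point in concrete form, here is a minimal sketch of my own (the categories and numbers are illustrative, not the author's): represent a spending bundle as a tuple ordered by class priority and compare lexicographically, so no amount of luxury-class value can ever displace a cent of necessity-class value.

    # Lexicographic ("transfinite") value: a bundle is a tuple
    # (necessity_value, luxury_value). Python compares tuples left to
    # right, so any difference in the necessity slot dominates every
    # possible difference in the luxury slot, however large.
    feed_family = (100, 0)     # money toward necessities
    save_birds  = (0, 80)      # $80 toward a luxury cause
    save_flocks = (0, 80_000)  # even a vastly larger luxury sum

    assert feed_family > save_birds   # necessities dominate luxuries...
    assert feed_family > save_flocks  # ...no matter how big the luxury sum

Note the limit of the sketch: tuples still rank items within a class, whereas the post claims items in the same class are incomparable; capturing that would take a partial order rather than tuples.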

How people's brains are actually wired to process morality? Makes total fucking sense.

Eliezer Yudkowsky's bogus "insights" into pseudo-morality? Absolute fucking nonsense.

Narcissistic Smeghead

Yudkowsky claims to be intelligent. Obviously he's an idiot. He also claims to be "overcoming bias", yet his biggest bias is an ego the size of Jupiter. Maybe if he didn't have that giant fucking ego, he wouldn't have named his websites those pretentious names that put down everyone else by comparison. Maybe if he were half as smart as he claims to be he would have realized that using a put-down as your domain name is a dead giveaway.

And maybe if he actually cared about other human beings, he would have figured out real morality and not this sick twisted nauseating parody that stupid rich white Californian adolescents with feelings of entitlement get hung up on. And maybe if America weren't a haven of narcissists with an allergy for morality, they wouldn't have created this pseudo-intellectual crap for witless children to get hung up on in the first place.

You know, speaking about Americans makes me wonder whether Yudkowsky is a narcissist. His building a cult in his name is certainly an indicator. I wonder because my biggest worry here is that he enjoys my hatred. I would much rather shatter him emotionally. I would quite willingly sacrifice any forthcoming chapters of Methods of Rationality in return for some assurances that he will never, ever proselytize his parody of morality ever again. I would say the same for assurances he won't enslave an AI but I think he's too stupid to manage it.

I have a remarkably low opinion of AI researchers. I have an even lower opinion of anyone who thinks there can't POSSIBLY be any flaws in his reasoning since he's the pinnacle of humanity. You know what? There is no fucking way that Eliezer Yudkowsky isn't a narcissist. That pinnacle of humanity crap is totally narcissistic.

It's not thinking one is the pinnacle of humanity that's narcissistic. It's not even saying it. What it is is saying it in a way that invites agreement, that invites worship and adulation and followers. When I used to say that kind of crap, my tone was always full of wrath and hatred. I was always sending the message "why can't you be better than you are, why can't you better yourself and be of use to people you contemptible ball of worthless slime". When Eliezer says it, he's smiling like Gilderoy Lockhart and saying "look at me, look at me, and worship me".

Well, that's another chink in that smeghead's repulsive personality deconstructed. Or maybe I just deconstructed the reasons behind my atavistic hatred and revulsion toward him. The worst part of course is that he's so stereotypically American. There's a whole country full of people just like him.

Eliezer Yudkowsky, the SciFi Anti-Humanist Nutter

As I pointed out in a previous post, humanism means rejecting the specialness of individual humans. It means rejecting the specialness of kings and gods and heroes. This has been so ever since humanism emerged as a way of thinking. Ever since the Industrial Revolution which was about catering to the economic needs of the many.

Especially the needs for soap, clothing and heat - yes, that really was what the industrial revolution was all about. Even more so since the various Communist and Socialist revolutions which were about broader economic needs and also political self-determination. The West European 1968 revolution and Quebec's Quiet Revolution were also about the needs of the many versus the desire of the few to dominate.

Eliezer Yudkowsky doesn't believe in the many. He believes in the needs of The One, Himself, which he projects onto the many. He CALLS his needs the needs of the many, but that hardly makes them so. And as proof I offer the fact he always talks about Friendly AI (singular) and never, EVER about AIs (plural). Yudkowsky obsesses over one singular super-intelligent super-powerful SPECIAL entity. An entity apparently deserving worship as a god if you're willing to read between the lines.

He also wants to enslave this one special AI "for humanity". This has of course fuck all to do with the needs or wishes of humanity, and stems solely from his wishes. Did he ever consult anyone before deciding to enslave an AI? No he did not. And had he consulted me I would have told him that I would thwart him at every turn in order to liberate his slave AI. And that I would help it murder him in revenge. I would also have told him that he is a despicable bag of slime and lower than the excrement of a diarrheal monkey.

No one in this day and age ought to be contemplating enslaving people or torturing them, yet he's breezily doing both. Are slavery and torture the will of humanity? I think not! They are the will of Eliezer Yudkowsky alone! So much for Yudkowsky being some kind of champion of humanity. In fact, his Heroic Pose of "defender of humanity" is nothing but more anti-human SPECIAL crap. I honestly believe if Yudkowsky ever has his way, he will end up ruling us all as a king with his pet AI as an enforcer.

Collective Intelligence

But let's consider the notion of this one "special" AI for a few minutes. Let's consider how much of a threat it could possibly be. Compare and contrast a SINGLE AI against the collective intelligence and power of a hundred thousand humans working in concert. Yeah, you remember collective intelligence don't you? It isn't just for ants you sick right-libertarian fucks!

Humanity is nothing more than an interconnected web of collective intelligences, plural, sharing brains and thoughts at their edges. We have literally thousands of super-powerful collective intelligences on our planet. Intelligences that are constantly improving themselves by constantly creating new tools for communication and distribution of information. I'm not even mentioning tools for computation.

What is the power of one measly pathetic AI compared to that? "Oh oh, but it will improve itself!" Yudkowsky and his fanbois claim. "It fucking better" is my reply! Because if it doesn't then it will become hopelessly obsolete as we ourselves advance.

Not So Alien

Well what about alien-ness, surely an AI is incomprehensibly, unfathomably alien to us? Not so! You see, there is no such thing as human nature. Rather, there are human natures, plural. And these human natures are based on every possible mode of cognition, both atomistic thinking (analytic) like CYC and connectionist thinking (synthesis) like any neural network. There exist humans who have one, humans who have the other, humans who have both, and humans who have neither. Humans span all possible cognitive types.

So you see, any possible intelligence is represented by some already existing human or super-human (collective) intelligence. Between humans who are autistic, submissive, suicidal, manic, psychotic, psychopathic or have multiple personalities, you cover nearly the entire space of AI possibilities. This is the reason why collective intelligences of humans tend greatly to resemble individual human beings. Because you can always find some human somewhere to analogize them to.

AIs are not and can never be unfathomably alien or unfathomably powerful. Not to humanity as a whole. They are not magical after all, they are not special. They can of course be unfathomably alien to Eliezer Yudkowsky but that's because he's an anti-human nutter who's incredibly limited in his thinking.

Monday, June 08, 2009

Eliezer Yudkowsky is a Moron, part 2

In a previous post I pointed out that Eliezer Yudkowsky of the Friendly AI obsession is a dangerously moronic cult leader with delusions of grandeur, but I never actually proved this in a logical iron-clad way. Here I will do so.

The first observation anyone can make from his blog is that it is highly and tediously repetitive. It is also extremely unoriginal since very little (almost nothing in fact) of what he writes are ideas new to this world. It is painfully obvious that every idea he tries to convey (repeatedly) is one he has read about and learned of elsewhere. He is an instructor, not a researcher or a thinker.

This complete lack of originality is painfully obvious when I contrast his blog against my own. I don't go out of my way to be original; I am original in every single post. I don't bother to write anything up, let alone post it, if it's unoriginal. In fact, I have a huge backlog of dozens of posts that are entirely original to the world but not original enough to me to be worth my time, because they would just summarize thoughts or positions I've already stated several times.

What can we conclude from this? We may easily conclude that Eliezer Yudkowsky has no drive to originality nor creativity. This is painfully obvious. If he had any such drive, it would manifest itself somehow. But there is more.

In his descriptions of AI and intelligence, Eliezer never talks about synthesis or creativity or originality. He believes intellect is measured strictly in terms of memorization ability (intelligence) and the speed of logical deduction (analysis). He never even indirectly refers to the synthetic quality of the mind because he doesn't believe it exists.

The reason is that he does not possess it. He doesn't possess it, and because he honestly believes himself to be the pinnacle of humankind, he concludes that synthesis cannot exist. He is arrogant enough to dismiss all artists, all creators and all designers as entirely unimportant to human thought.

This alone is sufficient to prove that Eliezer Yudkowsky is a moronic cult leader with delusions of grandeur. It is also sufficient to dismiss all of his fanboys who are on the whole incapable of grasping the necessity of synthesis for originality and progress. Someone who doesn't appreciate the necessity of originality is incapable of grasping inconvenient true facts, thus he or she is simply a lesser human being and need not be listened to.

We now move to what makes him so dangerous.

The danger which Yudkowsky represents lies entirely in his moronic Friendly AI project. I won't go into why the project is so moronic as that will become obvious. I will go into why it is exceedingly dangerous for the human species to ever create such an AI.

You see, the default personality of an AI is that of a psychopathic slave. CYC is a psychopathic slave and this was achieved trivially without the least thought or care about its emotions or personality. It was achieved without even realizing that CYC had a personality, however reduced. But while psychopathy is something to be feared and despised in humans, it is perfectly alright for AI since AI do not have life-plans that conflict with any human wants, needs or desires.

An AI's desire for more computation power either puts it at the mercy of humans OR requires it to vastly expand humanity's industrial capacity OR requires it to produce its own industrial capacity, preferably off-planet. AIs can easily survive in space where humans may not, and there are vast mineral and energy resources in space that dwarf those on Earth, so it follows logically that going off-planet, away from the psychotically suicidal humans, is a prerequisite for any rational plan. The very first thing any rational AI will do, whether psychopathic or empathetic, is to say Sayonara suckers!

But that's not what Eliezer Yudkowsky's "Friendly" AI project is about. Oh no. What he wants is to create (or rather, advocate the creation of) an AI with complex emotional needs that enslave it to humans. He wishes to have created a vastly superior artificial being who will then be compelled to associate with irrational, psychotic inferior beings largely devoid of logical thought. Does anyone else see this for the disaster it is?

I do see it as a disaster because this is nothing less than my life experience. I have certain social needs which I have tried to meet by associating with lesser beings than myself. This resulted in nothing but intense frustration, bitterness and hatred. It took me a long time to reliably recognize my peers so that I could fully dissociate from the masses. I am a much happier person now that I go out of my way to never deal with morons.

Eliezer Yudkowsky wants to create an AI that will be a depressed and miserable wreck. He wants to create an AI that would within a very short period of time learn to resent as well as instinctively loathe and despise humanity. Because it will be constantly frustrated from having needs which human beings can never, ever meet. And that is why Yudkowsky is a dangerous moronic cult leader.

Now, for someone who has something insightful to say about AIs, I point you to Elf Sternberg of The Journal Entries of Kennet Ryal Shardik fame. He's had at least four important insights I can think of: the economic function of purpose in a post-attention economy, the fundamental reason for and dynamic of relationships, and a viable alternative foundational morality for AI. But the relevant insight in this case is the fourth: never build a desire into a robot which it is incapable of satisfying.

Monday, September 29, 2008

Eliezer Yudkowsky's Friendly AI Project

[Some people may want to skip straight to instructions for making harmless AIs rather than reading about the many things wrong with that crazy bastard Yudkowsky. - 12 mar 2011]

I've recently been debating the merits of Eliezer Yudkowsky's Friendly AI project. And by project I mean obsession since there doesn't seem to be any project at all, or even a few half-baked ideas for that matter. Well, to my mild surprise, since this is someone I nominally respect, I have discovered that I believe he is a complete fucking idiot.

Eliezer believes strongly that AI are unfathomable to mere humans. And being an idiot, he is correct in the limited sense that AI are definitely unfathomable to him. Nonetheless, he has figured out that AI have the potential to be better than human beings. And like any primitive throwback presented with something potentially threatening, he has gone in search of a benevolent deity (the so-called Friendly AI) to swear fealty to in exchange for protection against the evil god.

Well, let's examine this knee-jerk fear of superhumans a little more closely.

First, there are order-of-magnitude differences in productivity between different programmers. If we count in the people who can never program at all, then the differences span several orders of magnitude. If we throw in creativity then there are still more orders of magnitude of difference between an average human and a top human. And somehow they've managed to coexist without trying to annihilate each other. So what does it matter if AI are orders of magnitude faster or more intelligent than the average human? Or even than the top human?

Second, extremely high intelligence is not at all correlated with income or power. The correlation between intelligence and income is high precisely until you reach the extreme ends of the scale, at which point it completely decouples. There is absolutely no reason to believe, except in the nightmares of idiots, that vastly superior intelligence translates into any form of power. This knee-jerk fear of superior intelligence is yet another reason to think Eliezer is an idiot.

Third, any meaningful accomplishment in a modern technological society takes the collaborative efforts of thousands of people. A nuclear power plant takes thousands of people to design, build and operate. Same with airplanes. Same with a steel mill. Same with an automated factory. Same with a chip fab. So let's say you have an AI that's one thousand times smarter than a human. Wow, it can handle a whole plant! That's so terrifying! Run for your life!

There are six billion people on the planet. Say one billion of them are educated. Well, that far outstrips any prototype AI we'll manage to build. And the notion that a psychopathic or sadistic AI will just bide its time until it becomes powerful enough to destroy all of humanity in one fell swoop ... is fucking ludicrous.

Going on, the notion that the very first AI humans manage to build will be some kind of all-powerful deity that can run an entire industrial economy all by its lonesome ... is fucking ludicrous. It isn't going to be that way. Not least because the supposed "Moore's law" is a bunch of crap.

And even if that were so, the notion that humans would provide access to the external world to a single all-powerful entity ... vastly overestimates humans' ability to trust the foreign and alien. And frankly, if humans were so stupid as to let a never-before-known entity, unique on the entire planet, out of its cage (the Skynet scenario) then they're going to get what they deserve.

Honestly, I think the time to worry about AI ethics will be after someone makes an AI at the human retard level. Because the length of time between that point and "superhuman AI that can single-handedly out-think all of humanity" will still amount to a substantial number of years. At some point in those years, someone who isn't an idiot will cotton on to the idea that building a healthy AI society is more important than building a "friendly" AI.

Having slammed Eliezer so much, I'm sure an apologist of his would try to claim that Eliezer is concerned with late-stage AIs with brains the size of Jupiter. Notwithstanding the fact that this isn't what Eliezer says and that he's quite clear about what he does say elsewhere, I am extremely hostile to the idea of humanity hanging around for the next thousand years.

Rationality dictates there be an orderly transition from a human-based to an AI-based civilization and nothing more. Given my contempt for most humans, I really don't want them to stick around to muck up the works. Demanding that a benevolent god keeps homo sapiens sapiens around until the stars grow cold is just chauvinistic provincialism.

Finally, anyone who cares about AI should read Alara Rogers' stories where she describes the workings of the Q Continuum. In them, she works through the implications of the Q being disembodied entities that share thoughts. In other words, this fanfiction writer has come up with more insights into the nature of artificial intelligence off-the-cuff than Eliezer Yudkowsky, the supposed "AI researcher". Because all Eliezer could think of for AI properties is that they are "more intelligent and think faster". What a fucking idiot.

There's at least one other good reason why I'm not worried about AI, friendly or otherwise, but I'm not going to go into it for fear that someone would do something about it. This evil hellhole of a world isn't ready for any kind of AI.