Monday, December 27, 2010

Eliezer Yudkowsky the Utilitarian Idiot

Utilitarianism is absurd on its face, since the notion of a global linear aggregation of the non-existent "functions" each person is supposed to have (but doesn't) is impossible. But let's skip the well-known theorem in public choice theory that proves this and go straight to a counter-example.

You have three AIs: two prefer A over B, and the third prefers B over A. Assuming A and B are totally arbitrary things of no moral significance, utilitarianism says A should be chosen over B. At least until the third AI rewrites its own preferences so that they are amplified 10-fold. Now that B's value has been arbitrarily and artificially amplified, the third AI gets its way.

How? Just because the third AI really, REALLY wants B over A. No other reason than that. Apparently what a tiny minority really REALLY want should hold sway over the rest of the population if they just want it badly enough. What kind of fucked-up logic is that? Apparently, if someone is clinically depressed and they don't care if they live or die, then suddenly it's okay to kill them to make $100 off an insurance scam? This is utilitarian "logic".
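To make the arithmetic of the counter-example concrete, here is a toy sketch (in Python; the names and numbers are just the illustrative ones from the scenario above, not anyone's actual proposal):

    # Naive linear aggregation: pick whichever option has the largest summed "utility".
    def aggregate(agents, options):
        return max(options, key=lambda option: sum(u[option] for u in agents))

    # Two AIs mildly prefer A, one mildly prefers B.
    agents = [{"A": 1, "B": 0},
              {"A": 1, "B": 0},
              {"A": 0, "B": 1}]
    print(aggregate(agents, ["A", "B"]))  # A wins, 2 to 1

    # The third AI rewrites its own preferences, amplifying them 10-fold.
    agents[2] = {"A": 0, "B": 10}
    print(aggregate(agents, ["A", "B"]))  # now B wins, 10 to 2

Nothing about the world changes between the two calls; one agent simply scales up its own numbers, and the "right" answer flips.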

Eliezer

Utilitarianism is completely, utterly, totally and thoroughly amoral. It is repugnant in the extreme. And ... Eliezer Yudkowsky subscribes to it. Because he is a thoroughly amoral dirt-bag.

I don't read Yudkowsky's blog but I do read his fiction. In one of the latest chapters of Methods of Rationality, HP describes an experiment where some psychologists tried to determine the value of saving 2,000 vs 20,000 birds from an oil slick, and the amounts people were willing to pay all turned out to be about the same.

Eliezer the Utilitarian numb-nut (since HP in that story is just a stand-in for Eliezer) calls this a "cognitive bias", as if there were something wrong with human brains for not reaching his expected Utilitarian conclusion that saving two birds is worth twice as much as saving one bird.

There is absolutely nothing wrong with it! The only thing wrong here is Eliezer's bogus notion that he is the ultimate arbiter of everything. And that EVERY time human brains don't work the way he expects they should, it's because they're defective.

Transfinites

The truth is that morality works based on transfinite numbers, not on finite numbers. Just by switching to transfinite numbers you solve most of the problems with Utilitarianism. Of course, you do that by utterly destroying the underpinnings of Utilitarianism, because now you can no longer make any kind of decision about whether A or B is the moral outcome since they're too similar to each other. (This is called Free Will and it is notable that Eliezer doesn't like it.)

But in the case of saving birds from oil slicks, it becomes easy to see why such rescues could have constant value regardless of the number of birds. After all, people use money to feed themselves, feed their children, provide housing, provide all the other necessities of participating in a highly technological democratic society (like internet access), and then there are life's little luxuries. For bourgeois middle-classers, saving birds from oil slicks is in there somewhere among life's luxuries.

First and most importantly, money allocated to saving some dumb fucking birds will never displace one cent from feeding or clothing your family, nor from ANY other necessity. Secondly, whatever sum is assigned to saving birds is pretty arbitrary and not directly comparable to the sums assigned to any other luxuries, because you're using transfinite numbers and you can't say that two items in the same class have more or less value than each other.

The only thing that determines the amount given over to saving birds is that it be enough to be representative of the class. $80 is what middle-class people might assign to a luxury they care deeply about, and so that's how much is going to go to it. No more, no less.
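Here is a toy sketch of that allocation rule (in Python; the category names, the income, and the $80 figure are just illustrative stand-ins for the classes described above):

    # Necessities are funded first and in full; each luxury a person actually cares
    # about gets one flat, representative sum, no matter how "big" the cause is.
    NECESSITIES = {"food": 600, "housing": 900, "internet": 50}
    LUXURY_SUM = 80  # the representative middle-class amount mentioned above

    def allocate(income, luxuries):
        remaining = income - sum(NECESSITIES.values())
        budget = dict(NECESSITIES)
        for luxury in luxuries:
            if remaining >= LUXURY_SUM:
                budget[luxury] = LUXURY_SUM
                remaining -= LUXURY_SUM
        return budget

    # Saving 2,000 birds and saving 200,000 birds get exactly the same $80,
    # and neither can ever eat into the necessity rows.
    print(allocate(2500, ["save 2,000 birds"]))
    print(allocate(2500, ["save 200,000 birds"]))

The number of birds never enters the calculation; only membership in the class of luxuries a person cares about does.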

How people's brains are actually wired to process morality? Makes total fucking sense.

Eliezer's Yudkowsky's bogus "insights" into pseudo-morality? Absolute fucking nonsense.

Narcissistic Smeghead

Yudkowsky claims to be intelligent. Obviously he's an idiot. He also claims to be "overcoming bias", yet his biggest bias is an ego the size of Jupiter. Maybe if he didn't have that giant fucking ego, he wouldn't have named his websites those pretentious names that put down everyone else by comparison. Maybe if he were half as smart as he claims to be, he would have realized that using a put-down as your domain name is a dead giveaway.

And maybe if he actually cared about other human beings, he would have figured out real morality and not this sick twisted nauseating parody that stupid rich white Californian adolescents with feelings of entitlement get hung up on. And maybe if America weren't a haven of narcissists with an allergy to morality, they wouldn't have created this pseudo-intellectual crap for witless children to get hung up on in the first place.

You know, speaking about Americans makes me wonder whether Yudkowsky is a narcissist. His building a cult in his name is certainly an indicator. I wonder because my biggest worry here is that he enjoys my hatred. I would much rather shatter him emotionally. I would quite willingly sacrifice any forthcoming chapters of Methods of Rationality in return for some assurances that he will never, ever proselytize his parody of morality ever again. I would say the same for assurances he won't enslave an AI but I think he's too stupid to manage it.

I have a remarkably low opinion of AI researchers. I have an even lower opinion of anyone who thinks there can't POSSIBLY be any flaws in his reasoning since he's the pinnacle of humanity. You know what? There is no fucking way that Eliezer Yudkowsky isn't a narcissist. That pinnacle of humanity crap is totally narcissistic.

It's not thinking one is the pinnacle of humanity that's narcissistic. It's not even saying it. What's narcissistic is saying it in a way that invites agreement, that invites worship and adulation and followers. When I used to say that kind of crap, my tone was always full of wrath and hatred. I was always sending the message "why can't you be better than you are, why can't you better yourself and be of use to people, you contemptible ball of worthless slime". When Eliezer says it, he's smiling like Gilderoy Lockhart and saying "look at me, look at me, and worship me".

Well, that's another facet of that smeghead's repulsive personality deconstructed. Or maybe I just deconstructed the reasons behind my atavistic hatred and revulsion toward him. The worst part of course is that he's so stereotypically American. There's a whole country full of people just like him.

35 comments:

peter woo said...

... it's not clear how the transfinite numbers are being used here. Can you explain?

Richard Kulisz said...

It's not clear to me what there is to explain. Let's say the moral value of either A or B is aleph_15. Then the third AI multiplying B's value by 10 gives 10 * aleph_15 = aleph_15, and 20 billion AIs wanting A gives 20 billion * aleph_15 = aleph_15. If A and B are morally equivalent then there is no possible moral basis for deciding between them.
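Spelled out, this is just the absorption rule of cardinal arithmetic (the subscript 15 is arbitrary, as above):

    n \cdot \aleph_{15} = \aleph_{15} \quad \text{for every cardinal } n \text{ with } 1 \le n \le \aleph_{15}

so both 10 * aleph_15 and 20,000,000,000 * aleph_15 collapse right back to aleph_15, and neither option ends up with more moral weight than the other.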

Yudkowsky's mistake is to try to substitute convention (democracy in this case) for morality. In his narcissistic hubris, he's blind to the existence of morality since he doesn't possess it. But morality, ethics, necessity and convention are all independent things even if together they make up justice. Just as politeness, empathy, and altruism are independent of each other.

Just so I'm clear here, morality says NOTHING about whether to spend money to save birds from oil slicks or on a new television. They are morally equivalent choices.

Pete said...

I have known about Eliezer since 2004, and this is the first time I've seen someone call him an idiot. Weird. I think he is very much a non-idiot.
Also, I noticed you have previous entries in which you call him such, but I wanted to respond to the more recent one.

Richard Kulisz said...

For all of Yudkowsky's high intelligence and supposed concern about empirical reality, he is incapable of learning any of the following empirical facts:

1) that he is a narcissist, thus inferior to the bulk of humanity
2) that his perception of himself (as a super-man) is grossly skewed
3) that his perception of humanity (as randomly irrational) is grossly skewed
4) that his perception of his relation to humanity (that all must bow to the super-man) is grossly skewed
5) that there is something special in the brains of artists and designers (synthesis) which he does not share
6) that morality means caring for the group's welfare and wishes intrinsically, and that as a narcissist he is utterly incapable of it

Yudkowsky has faith that he is a perfect being. And because of that, he is incapable of learning any imperfections of himself. And someone incapable of learning is, by definition, an idiot.

Brian said...

"Not only is Utilitarianism absurd since the notion of a global linear aggregation of non-existent "functions" each person is supposed to have (but doesn't) is impossible. Let's skip the known theorem in public choice theory that proves this and go straight to a counter-example."

I don't know what the theorem is, but I have always assumed utilitarians could and do use non-linear aggregations. Does the theorem disprove those, and what is it?

Also, are utilitarians defined as those who, among other things, use linear aggregations? If so, then utilitarianism might be disproved without infringing on nearly identical systems, which wouldn't lose intellectual importance even if they are comparatively unpopular.

As for the counter-example, I think it doesn't work. Perhaps the ideal utilitarian answer to the question "what is right when two robots have weak preference A and one has a very strong preference for B?" is that it depends, and the hypothetical doesn't provide enough information to answer. Specifically, you would have to know the prior situation.

Of course, it's not really that what is right to do in a state "S2" depends on which of several possible states, "S1a" or "S1b" preceded it. The previous state would only count insofar as it is manifested in the world by being remembered, thereby distinguishing a "S2a" from a "S2b".

Richard Kulisz said...

Brian, you are wrong in so many fundamental ways, it's ridiculous.

First of all, responding to your assertion that Utilitarianism would answer "it depends" to the scenarios I posed above, you are so wrong.

The whole POINT of utilitarianism is that in such a totally abstract situation you CAN tell the outcome. Utilitarianism is ALL ABOUT claiming you can have meaningful answers when given a scenario of:

the world consists of three agents with the following "utility functions": two agents have a (1,0) "utility function" over (A,B) while one agent's "utility function" is (0,10). There is no option C; A and B are mutually exclusive.

I've proven there can be no solution to such a situation and that the imputation that there ever could be one is more than absurd, more than nonsense: it is *ludicrous*. Mission accomplished.

You simply don't get to waffle on and claim that Utilitarianism isn't what Utilitarianism is, that it's something different that would somehow, magically, not fail over the most trivial reductio ad absurdum arguments.

You remind me of modern defenders of behaviourism who try to rewrite past history, claiming behaviourism never asserted that human beings had no minds, no expectations, no anticipation, no higher emotions, no internal states, no consciousness, and nothing that makes the human mind human.

Basically, you're acting in an intellectually dishonest way in order to rewrite history. Because your side lost, lost big, was totally crushed, and now you're trying to claim you never wanted the territory you were holding in the first place, you REALLY wanted the territory you're retreating to.

Nor is behaviourism the first place you can encounter such intellectual dishonesty. Yours is the God Of The Gaps approach common among religionists. "Let's ignore that for 2 millennia science was a religious enterprise until science turned virulently atheist and conquered all the territory once held by religion. No, let's ignore 2000 years of history and say there is no conflict between science and religion." Because preaching peace is the right approach when you're the LOSER.

Richard Kulisz said...

And I made the god of the gaps accusation before checking out your blog. Now continuing my scheduled destruction of your position.

With respect to your claim that non-linear aggregations provide a solution, that also is just not on.

We start with a solution space for "possible aggregation formulae" and I note that this solution space is remarkably flat so that no one solution stands out from it. Instead, because this space is flat, picking a single formula from it is an exercise in arbitrariness and is just not on.

Now you come along and YOU say the solution space is EVEN BIGGER than I stated, that it's much MUCH bigger in fact. That it contains all kinds of formulae which are themselves MUCH MORE COMPLICATED than simple linear formulae. And so are even LESS obvious as standouts. And I'm scratching my head and I want to yell HOW THE FUCK DOES THIS HELP YOU?!

Because it doesn't help you. At all. You're shooting yourself in the foot. That's your brilliant tactic. You've made your solution space LARGER and MORE complicated, with even WORSE solutions than the infinitely many solutions I dismissed on the grounds there were infinitely many of them.

Oh but wait, here's where the magic comes in. You didn't just expand the solution space with what at first and second glance are WORSE solutions. You also waved your hand and started arguing FROM IGNORANCE, saying "WE DON'T KNOW there is a standout solution in this larger solution space of crappier solutions, therefore logically THERE IS such a solution".

We don't know there is a god, therefore there is a god? Is that really what you want to be arguing with me? Or ever, to anyone?

You want to rescue utilitarianism with a non-linear solution? Fine. Then produce a standout non-linear solution from within that space. One that is obviously much better than all the rest. And if you can't do that then produce some halfway convincing arguments why more complex non-linear formulae are less arbitrary than simpler linear formulae. AND a very convincing argument showing how the outcomes of non-linear formulae can't be trivially rigged by linear fiddling of an agent's desires.

Non-linearity isn't a solution, it's a problem. So now instead of the ONE problem I originally posed in this blog post, you have TWO problems. And the original of those two problems is still that math, ANY kind of math, will never work at resolving values / preferences / desires. A fact you tacitly acknowledged when you said the hypothetical "doesn't provide enough information to answer".

Brian said...

"The whole POINT...is ALL ABOUT claiming you can have meaningful answers when given a scenario..."

I'm pretty sure Utilitarianism (and the thing that resembles it but is merely my charitable reading, so no one [including me] might actually agree with it) allow for multiple solutions to a problem. As long as you definitively show, e.g., that the health(+all other variables) value in utilitons of single payer plan X is the same as the health+freedom(+remaining variables) value in utilitons of private insurance market healthcare, you could theoretically be indifferent between them.

It's also meaningful to say "Hey, you forgot the variable of different costs to the legal system. Suspend judgment until we have taken care of all the variables." Your "word problem" hypo plainly adds a variable not present in the formal proofs. It's possible that a world created with rational valuing entities (2,0) and (0,10) is different than one created with rational agents (2,0) and (0,1) in which they are capable of modification, and the latter in fact modifies. You're inserting another variable but you didn't present evidence that changing things doesn't, well, change things in the way I think it may.

"Basically, you're acting in an intellectually dishonest way in order to rewrite history."

For me to consider a category of idea defeated, I have to consider the strongest representative of that category, even if it is something I construct like Frankenstein's monster from the pieces of a typical believer's argument. Usually, I won't be entirely sure exactly what someone means, so I just make sure I can defeat the strongest of any type they say, like a proof round in a gun. It's not hard to defeat sophisticated behaviorism or theology, so it's certainly nothing to get worked up about.

"You've made your solution space LARGER and MORE complicated, with even WORSE solutions than the infinitely many solutions I dismissed on the grounds there were infinitely many of them."

You're over-personalizing this. That was actually the correct set to use all along.

In any case, I don't see why I need to find it so long as I can know that it exists and can approximate it.

"You also waved your hand and started arguing FROM IGNORANCE, saying "WE DON'T KNOW there is a standout solution in this larger solution space of crappier solutions, therefore logically THERE IS such a solution"."

Not really. You're trying to prove a set of theories wrong, so you have the burden of proof. You've only searched a small area of answer space, failed to find a function, and claimed that it doesn't exist based on the exhaustiveness of your search.

"One that is obviously much better than all the rest."

The word "better" is surprisingly unspecific or empty when you think about it.

What I can say about a non-linear function is that one would seem to match human intuition a lot better than a linear one would. To the extent one might construct a moral theory mostly out of people's existing values, diverging from them only to avoid irrationalities and Dutch books, a theory resembling classical utilitarianism would work well. It needs specific answers, like non-linear aggregation, where either I don't know utilitarianism's answer or it doesn't have just one.

As a point of fact I haven't actually said anything amounting to a claim that there is a solution, despite your assertion; I merely pointed out that your attempted disproof ignores an infinite number of relevant cases. But if the goal is to create something resembling both a human values system and a reasonable logic machine that does not make any of a certain number of errors, something that is a compromise between those two systems but better, then it seems that almost by definition it can be done. ("Almost" because it's possible that there is a convex Pareto frontier.)

Alrenous said...

"Apparently what a tiny minority really REALLY want should hold sway over the rest of the population if they just want it badly enough."

Contradiction.

"Assuming A and B are totally arbitrary things of no moral significance,"

Wanting power is morally significant.

Not that I believe in utilitarianism.

Richard Kulisz said...

Ugh. You're conflating the level and the meta-level.

I doubt you understand my rejoinder but I don't really care to explain your mistake to you.

Alrenous said...

That's fine. I just needed to test my prediction that you're an unserious intellectual.

Alrenous said...

I will consider your positions.

You won't consider mine.

QED.


You don't have any obligation to take me seriously. What, are you projecting onto me the obligation to take YOU seriously?

Rather, it is disturbingly odd for you to respond (as many do) to someone you don't take seriously.

You don't get to insult me and then tell me not to respond.

Moreover, it's tactically unwise. "Insult + don't talk to me" == "you can get revenge for the insult simply by not shutting up."

Luckily, I see revenge as petty. So don't worry.

In any case, you've successfully convinced me that your mind is closed and thus attempting communication is pointless.

This comment is just another test, really. An idle one, though.

Alrenous said...

Well, can't resist trying this too:

"Power is the only thing you care for."

A society run by me would allow yours as a microcosm.

A society run by you wouldn't allow mine.

You think I'm power hungry.

The evidence shows...

Richard Kulisz said...

I've heard that lying microcosm crap before. It's typical right-libertarian garbage, which just illustrates why your kind hasn't contributed anything to political science or intelligent political discourse.

That microcosm crap goes the other way, you know. One of the universal human rights is the right to private property. Does that suddenly mean that communism includes your vision of an ultra-fascist future as a microcosm? Hardly. Hardly!

Richard Kulisz said...

Oh and for anyone who cares, the PROOF that it's lying crap is simple.

In a world where right-libertarian garbage lived side by side with anarcho-communists, if the right-libertarians tried to enslave ANYONE ANYWHERE this would be deemed a violation of universal human rights. The right-libertarians could engage in all the mutually consensual BDSM games they wanted but the moment they tried to enforce their slavery "contract" against an unwilling person, this so-called "contract" would be ruled null and void by the anarcho-communists. And if the ultra-fascists tried to enforce it, the anarcho-communists would go to war to protect the victim whose human rights had been trampled underfoot.

In a world where anarcho-communists have genuinely equal status, right-libertarians' dreams of slave-ownership can never be fulfilled. Because if the anarcho-communists AREN'T allowed to impose universal human rights on everyone who wants them and asks for them (at any time and for any reason) then it is a LIE that anarcho-communists have equal status. And this is what reveals them and their "microcosm" crap to be the lying delusional ultra-fascistic crap it really is.

The left-libertarians want universal freedom. The right-libertarians want universal slavery. Thus they are really fascists. This is what had to be demonstrated. End of story.

Now STFU you lying asswipe. Any further comments from you will be summarily erased since you've maneuvered me into repeating myself THREE TIMES now.

Alrenous said...

I enjoyed the bits about colonization with nanotech/AI. And when you reminded me that planets are inefficient.

I will be stealing those ideas, as they're mainly correct.

This is why I'm a serious intellectual, and you aren't. I can learn from you, despite your best efforts. You can't learn from me, regardless of any efforts I might care to make. To a first-order approximation, I know everything you know in addition to everything I know, and you only know what you know.

"since you've maneuvered"

Yes, you're easy for me to manoeuvre. It would help to start being a serious intellectual. Then you could stop people toying with you as easily as I have just done.

It's good you've realized the only way to stop me is to not read me. That is the only defence the ignorant have against the wise, should the wise not be perfectly benevolent.

Well, at least you react when I poke you. More than most people can say.

One time this was even literal. I poked a guy in the back with a pen and he didn't notice. My benchmate and I ended up making a game of it all term.


By the way, what are your insults supposed to accomplish?


"Stop talking to me you [noise]."

As previously mentioned, insults combined with requests are tactically unwise.


"I've heard that lying microcosm crap before."

So if it wasn't lying, I would have said something that should convince you.

I wasn't lying. Oops.

"which just illustrates why your kind hasn't contributed anything to political science or intelligent political discourse."

If it wasn't garbage because it was a lie, it would be a worthwhile contribution. Otherwise, you could have attacked the logic, instead of slinging slights.

It wasn't a lie. Oops.

Alrenous said...
This comment has been removed by a blog administrator.
Alrenous said...
This comment has been removed by a blog administrator.
Alrenous said...
This comment has been removed by a blog administrator.
Richard Kulisz said...

Actually, given how much and how blatantly he lies, I'm starting to think he's a psychopath. In either case, I certainly do not want to deal with this stalker.

jimf said...

> It's not thinking one is the pinnacle of humanity that's
> narcissistic. It's not even saying it. What it is is saying
> it in a way that invites agreement, that invites worship
> and adulation and followers.

Sam Vaknin, "Facilitating Narcissism"
http://samvak.tripod.com/narcissistoperator.html

"Narcissists are aided, abetted and facilitated by four types
of people and institutions: the adulators, the blissfully ignorant,
the self-deceiving and those deceived by the narcissist.

The adulators are fully aware of the nefarious and damaging
aspects of the narcissist's behavior but believe that they are
more than balanced by the benefits - to themselves, to their
collective, or to society at large. They engage in an explicit
trade-off between some of their principles and values - and
their personal profit, or the greater good.

They seek to help the narcissist, promote his agenda,
shield him from harm, connect him with like-minded people,
do his chores for him and, in general, create the conditions
and the environment for his success. This kind of alliance
is especially prevalent in political parties, the government,
multinational, religious organizations and other hierarchical
collectives.

The blissfully ignorant are simply unaware of the "bad sides"
of the narcissist- and make sure they remain so. They look
the other way, or pretend that the narcissist's behavior is
normative, or turn a blind eye to his egregious misbehavior.
They are classic deniers of reality. Some of them maintain
a generally rosy outlook premised on the inbred benevolence
of Mankind. Others simply cannot tolerate dissonance
and discord. They prefer to live in a fantastic world where
everything is harmonious and smooth and evil is banished.
They react with rage to any information to the contrary
and block it out instantly. This type of denial is well
evidenced in dysfunctional families.

The self-deceivers are fully aware of the narcissist's
transgressions and malice, his indifference, exploitativeness,
lack of empathy, and rampant grandiosity - but they
prefer to displace the causes, or the effects of such
misconduct. They attribute it to externalities ("a rough patch"),
or judge it to be temporary. They even go as far as accusing
the victim for the narcissist's lapses, or for defending
themselves ("she provoked him").

In a feat of cognitive dissonance, they deny any
connection between the acts of the narcissist and
their consequences ("his wife abandoned him because
she was promiscuous, not because of anything he
did to her"). They are swayed by the narcissist's
undeniable charm, intelligence, or attractiveness.
But the narcissist needs not invest resources in
converting them to his cause - he does not deceive
them. They are self-propelled into the abyss that is
narcissism. The Inverted Narcissist, for instance,
is a self-deceiver ( http://samvak.tripod.com/faq66.html )

The deceived are people - or institutions, or collectives -
deliberately taken for a premeditated ride by the narcissist.
He feeds them false information, manipulates their
judgment, proffers plausible scenarios to account for
his indiscretions, soils the opposition, charms them,
appeals to their reason, or to their emotions, and
promises the moon.

Again, the narcissist's incontrovertible powers of
persuasion and his impressive personality play
a part in this predatory ritual. The deceived are
especially hard to deprogram. They are often
themselves encumbered with narcissistic traits
and find it impossible to admit a mistake, or to
atone. They are likely to stay on with the narcissist
to his - and their - bitter end.

Regrettably, the narcissist rarely pays the price
for his offenses. His victims pick up the tab.
But even here the malignant optimism of the
abused never ceases to amaze. . ."

Richard Kulisz said...

Anyone who cares to think about it will realize that Yudkowsky's writing of entertaining fiction doesn't outweigh his lying support of anti-human positions like utilitarianism and AI slavery.

> They engage in an explicit trade-off between some of their principles and values - and their personal profit, or the greater good.

I can imagine little more reprehensible and despicable than this. To have principles and yet to consciously set them aside!

Actually, that's a perennial issue in human evolution because being hypocritical about principles is more psychologically advanced than being honestly barbaric.

But since it's a toss-up which repulses me more, and hypocrisy actually is more advanced, I figure it can only be a toss-up if I despise hypocrisy more.

Which makes sense since none of my values concern being more advanced. Progress is about forward momentum, not being ahead. On the other hand, I DO value truth.

Richard Kulisz said...

Thank you for the fascinating insight into psychological dynamics I am entirely unfamiliar with. Because narcissists repel me and their followers repel me at least as much.

Anonymous said...

You made the point that people tend to be willing to spend the same amount of money for different numbers of birds, etc., because the factors that influenced their decision to donate were not really related to their perceptions of morality in general. This was insightful, and true. People donate to causes for the purpose of social signalling, feeling good about themselves, etc. However, on an abstract level, if something is a moral good, is it too crazy to posit that twice as much of it is twice as good, or at least more good with the possibility of diminishing returns as you get more and more of this theoretical good construct? The penguins (or whatever) were just a stand-in (and not a good one, as you noticed), but the point remains that people don't act proportionally on their moral issues of choice. This is a cognitive bias, by any other name.

Richard Kulisz said...

> However, on an abstract level, if something is a moral good, is it too crazy to posit that twice as much of it is twice as good

I already answered this question, you worthless contemptible fuck. The answer is YES, yes it really is crazy to "posit" this. Which is why nobody but crazy people does so.

Richard Kulisz said...

There is nothing more despicable than someone who "learns" the exact opposite of what you tell them. Which exact opposite happens to oh so conveniently be exactly what they want to believe, and exactly what they started out believing in the first place.

Richard Kulisz said...

There is precious little more despicable than the notion that economics has anything at all to do with morality. Or that right-wingers, especially right-libertarian fucks, have the slightest grasp on morality.

The whole notion of a "moral good" is a despicable oxymoron. Morality does not and never will align with economics. It is an overriding system with its own axioms and principles alien to economics. And that's why right-wing libertarians hate morality.

Moral calculations are not economic calculations. Moral concepts are not economic concepts. Moral axioms are not economic axioms. And there is absolutely no way to harmonize these two radically different systems. Though resolving the conflict is trivial since morality trumps economics.

Anonymous said...

I know this is a little dated, but I just wanted to make sure you knew that Utilitarianism wasn't a theory that Eliezer Yudkowsky came up with. There have been lengthy and relatively important debates between Consequentialist and Deontological ethics for the last hundred years or so, with many reasonably intelligent philosophical scholars defending consequentialism, such as Peter Singer and Peter Railton. Not that this is definite proof it's a good theory; I don't hold it, but the way you wrote your article, you seemed to think that utilitarianism was an obvious write-off, whereas the scholars who are working on ethics don't see it as so clear a choice as you do.

Richard Kulisz said...

Consequentialism vs Deontology has FUCK ALL to do with Utilitarianism.

The "scholars" you talk about, I despise as strictly beneath me. I have done more and better work in moral theory than those worthless fuckers.

I have a blog post shredding John Rawls' bloated A Theory of Justice as worthless right-wing propaganda trying to arrive at communist conclusions without communist axioms. The adoption of which (communist axioms) would have reduced his 500 page piece of crap to 3 pages.

But then again, that's the difference between math and the historico-linguistic circle-jerking philosophers engage in because they're incapable of either logic or math!

Richard Kulisz said...

You should then read my blog post about big bang cosmology being creationist nonsense. Only eternal chaotic inflation (and now the ekpyrotic universe) makes any sense at all, either probabilistically or epistemologically. Inflation theory did not modify or rescue big bang theory; it annihilated it. And it only proves how idiotically incapable of judgement physicists are that they don't see this.

Misophile said...

The whole FAI project is a reductio ad absurdum of utilitarianism. Apparently, if a utility maximizer were actually instantiated, the whole world would be in grave danger. That is, unless you solve some very tricky problems on which Yudkowsky & Co. have made roughly zero progress. So that's, what, the 757th absurdity utilitarianism necessitates? It's a bit funny how the initial appeal of (let's call it) "real-number utilitarianism" is its simplicity and intuitiveness, yet when it leads to ridiculous complexity and violations of intuition it's supposedly every bit as justified. "Biting the bullet," they call it.

Richard Kulisz said...

Good argument. So it's equivocation; I'm not surprised.

For myself, the appeal of transfinite valuation (I refuse to call it utility) is that this is *actually* how the human mind works.

I can even prove it empirically. Though the proof takes a half-dozen pages and publishing it indiscriminately violates the prescription of the Orange Catholic Bible.

Not that I actually care about the survival of humanity per se.

Misophile said...

>For myself, the appeal of transfinite valuation (I refuse to call it utility) is that this is *actually* how the human mind works.
And for myself, the repugnance of "utility" is how much it DOESN'T model my mind or the mind of anyone whose company I'd ever keep. That and the insanity of a utility-maximizing society. There's no concept of justice, no concept of order (consistency, predictability), and a SEVERELY lacking concept of person. At best it "works" in hindsight, with enough contortion and crossing of fingers. The comorbid mental deficiencies are another clue: do they ever consider that THEY could be the fat guy going about his day before being tossed in front of a trolley? Do they consider tossing themselves? Ah, one can dream. (I couldn't resist :))

>Not that I actually care about the survival of humanity per se.
Right. But when you describe yourself as a utilitarian and all the while your life goal is to keep humanity safe from a hypothetical COMPETENT utilitarian, some alarms should go off. Regardless of how justified that goal is. But in the end, it's utility for me, not for thee, eh?

Note that these *aliens* want to lecture the rest of us about "human values"! ... Ever seen that phrase in Eliezer's corner of the internet?

Last thought: If the SIAI branch of AI theorizing is going to get anywhere, it'll be atop the ashes of their particular expected utility, decision theory, game theory, Bayesian probability theory edifice. Or it'll end somewhere awful, but they don't look capable enough for that.

Sorry for the late reply.

Anonymous said...

I agree with every point you make and disagree with all your proofs. I recommend focusing more on the big picture and less on why you think that's so. Nevertheless, as I say, I don't disagree with a single conclusion you made.

William Wyatt said...

I think the main thing I can say here is a quote I quite like:
_"If you can't go after a person's ideas, go after the person."_
I think that describes too much of what you've written here.