There is an obvious problem with Utilitarian moralities. I don't know whether Utilitarians even see it as a problem; they may well be dumb enough to count it as a plus. Here I will explain the problem and prove that it is indeed a problem: an unavoidable, irrefutable and fatal one. The problem is 'morality inversions by weight of numbers'.
What it boils down to is that you have some mechanism to multiply the small positive benefits of an evil act while putatively avoiding the multiplication of its enormous costs. Say, for example, you videotape a real live torture, rape, snuff film. For the next century, millions of sadists will be able to enjoy the experience vicariously, while anyone disturbed by the event will simply avoid it. All it will cost is a single life. Intuitively this is immoral. For idiots like Eliezer Yudkowsky it is possibly, probably, obviously moral.
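To make the mechanism concrete, here is a toy sketch in Python. All the numbers and names are invented for illustration, not taken from any actual utilitarian calculus; the only point is that a naive sum of utilities flips sign once the audience is large enough, which is exactly the inversion at issue.

```python
# Toy illustration of 'morality inversion by weight of numbers'.
# All figures are made up; the point is only that a naive sum of
# utilities flips sign once the audience grows large enough.

def naive_aggregate_utility(victim_cost, per_viewer_benefit, num_viewers):
    """Sum utilities across everyone affected, as a naive utilitarian would."""
    return num_viewers * per_viewer_benefit - victim_cost

VICTIM_COST = 1_000_000      # enormous harm to one person (arbitrary units)
VIEWER_BENEFIT = 1           # tiny benefit per sadistic viewer

for viewers in (1_000, 100_000, 10_000_000):
    total = naive_aggregate_utility(VICTIM_COST, VIEWER_BENEFIT, viewers)
    verdict = "net gain (act comes out 'moral')" if total > 0 else "net loss (act comes out 'immoral')"
    print(f"{viewers:>12,} viewers -> total utility {total:>12,} : {verdict}")

# The verdict inverts somewhere between 100,000 and 10,000,000 viewers,
# purely because of headcount, with nothing about the act itself changing.
```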
There are many problems with this particular morality inversion. Firstly, morality is an abstract hypothetical system, not a concrete calculation. Treating it as a concrete calculation, as morons such as Yudkowsky do, is wrong from the get-go and will only result in wrong answers.
Secondly, morality is an ought, and oughts are second derivatives of wants: they are what we WANT TO WANT. And we don't want a world in which snuff (at least the non-consensual kind; there was an interesting court case about consensual cannibalism in Germany a year or so ago) is considered moral. We don't want this and we don't want to want this. It's an obscenity. As a result, snuff can't be how the world ought to be, so it can't be moral. Obscenities generally can't be moral; that's what it means to be an obscenity.
Thirdly, and most grievously, the concept of a person is rather ill-defined for an AI or any society that includes people who can temporarily bifurcate (copy themselves and then merge back their memories). How many votes do you get if you clone yourself 20 times? In such societies, only moral systems that are completely independent of weight of numbers can produce well-defined decisions. And since such societies are our future, it behooves any future-oriented person to toss Utilitarianism by the wayside.
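As a rough sketch of why headcount-sensitive aggregation breaks down here (the copy-and-merge scenario and all the weights below are hypothetical): the same set of experiences yields different 'totals' depending on how the copies are counted, so the verdict is not well-defined.

```python
# Hypothetical sketch: one person forks into 20 copies that later merge
# their memories back. How much does their preference weigh in a
# headcount-based aggregate? Different counting rules give different verdicts.

ORIGINAL_PREFERENCE = -5      # the forker's dislike of some proposal
OTHER_VOTERS = [+2, +2, +2]   # three other people mildly in favour

def aggregate(copies_counted_as):
    """Naive sum where the forked person is counted 'copies_counted_as' times."""
    return copies_counted_as * ORIGINAL_PREFERENCE + sum(OTHER_VOTERS)

for rule, count in [("count the merged person once", 1),
                    ("count every temporary copy", 20)]:
    total = aggregate(count)
    print(f"{rule}: total = {total:+d} -> {'approve' if total > 0 else 'reject'}")

# The same experiences, the same people, two opposite verdicts: the
# aggregate is only as well-defined as the notion of 'one person' is.
```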
And that's not even the biggest problem with Utilitarianism since the whole concept of 'utility' is ill-defined.
The upshot of all this is that morality inversions are not "cool" or "deep" or a sign of "overcoming bias". They are WRONG. Persecution of minorities doesn't become a good idea just because the minority is small enough and the majority wants to do it badly enough. That would be absurd. That would be ANTI-morality. And anyone who sets aside these numerous deep flaws in order to appear elite or philosophical is just a blatant idiot. A poseur, not a philosopher.
This makes independence from weight of numbers the third fundamental property which any moral system must have in order to be coherent and well-defined. The first two are consistency across actors (different people applying the same moral system can't disagree on whether an act is moral or immoral) and consistency across order of application (the same outcome must result regardless of who acts to apply morality first).
2 comments:
>Firstly, morality is an abstract hypothetical system, not a concrete calculation. Treating it as a concrete calculation, as morons such as Yudkowsky do, is wrong from the get go and will only result in wrong answers.
This argument is essentially content-free. What exactly is an "abstract hypothetical system"? Is arithmetic an "abstract hypothetical system"? If not, why not?
The difference between you and Yudkowsky is that Yudkowsky evaluates events on a fine-grained level (1 person feeling y pain + a million people feeling x pleasure = net gain) and you evaluate on a coarse-grained level (1 thing that smells like an abomination to Richard Kulisz = net loss).
>morality is an ought and oughts are second derivatives of wants
Second derivative of a want = want to want to want. I think you just mean derivative. But you'll probably be too stupidly defensive to admit your error even in this minor case.
>Thirdly, and most grievously, the concept of a person is rather ill-defined for an AI or any society that includes people who can temporarily bifurcate (copy themselves and then merge back their memories). How many votes do you get if you clone yourself 20 times? In such societies, only moral systems that are completely independent of weight of numbers can produce well-defined decisions. And since such societies are our future, it behooves any future-oriented person to toss Utilitarianism by the wayside.
Are you suggesting that killing 20 people with identical DNA is morally identical to killing 1 person with unique DNA?
Your objection to utility functions is essentially that it's hard to tell whether you like one thing more than another. That's like objecting to the concept of length because none of the rulers available are precise enough to tell which of two sticks you found is longer.
>Yudkowsky evaluates events on a fine-grained level
You've read the linked article (one of Eliezer's classics), right? It's more like "1 person feeling infinite pain vs. infinite people feeling epsilon pain". Not very fine-grained.