Where to start? I could go into how Rawls makes a big mistake about what people would decide when he claims that 'freedom of religion' would end up a human right. It can't, for elementary reasons, and they wouldn't choose it anyway because many atheists wouldn't stand for it. That's his liberal prejudices at work, much like Kant's Christian prejudices led him to claim that suicide was immoral. I could even go into how Rawls makes a fundamental, and monumental, mistake in his definition of a 'minimal being', since what he describes is a database without any motivation (i.e., not an agent). But that would be trifling with details.
The big problem with Rawls' A Theory Of Justice is this ... it's impenetrable. If you can pick up a programming language specification and inhale it all in one sitting, then yeah you'll do fine. Otherwise? Forget it. Rawls' actual theory could be written up in 40 pages but his book has 500 pages. Of course its impenetrability is common to all philosophy texts. They're all padded with historico-linguistic shite because philosophers confuse the history and the terminology of their field with its actual subject matter.
This is similar to how physicists screw up physics education by mashing mathematics and history into actual physics. So anyone who's tried to learn about quantum physics will understand what's missing in A Theory Of Justice by comparing mainstream physics textbooks with Scott Aaronson's essays on the subject. Essays like Quantum Computing for High School Students and PHYS771 Lecture 9: Quantum.
But this impenetrability serves to conceal a much deeper and more intractable problem with the book. A Theory Of Justice is nothing but a work of propaganda targeted at moral philosophers. It's written in such a way as to ensure that the smarter you are, the less chance there is you'll call bullshit on it. It hammers you over the head with references and tires you out with long-winded explanations. But it's bullshit. Not the "conclusions" Rawls comes to (the universal human rights regime predated Rawls by several decades) but every single step used to get there! A Theory Of Justice fails in its goal and is particularly annoying while doing so.
Why is it annoying? Well, let's start with Rawls' attempt to pull a Galileo. He leads the reader down a winding road, constantly saying "you know this, you know this, you've always known this", and you end up in a completely different place from where you started. And the truth is *no*, you did not know any of this shite; it's the exact opposite of everything you ever believed. Propaganda can be convincing without being logical, but when it aspires to be a work of philosophy then that's fairly damning. Manipulation is, or at least should be, a big no-no in philosophy.
What Rawls Should Have Done
Rawls starts with an individual(istic) human and tries to persuade and cajole this person into accepting a collective perspective. But this is utter nonsense, because morality is collective by definition. So just short-circuit all of that and start from the collective viewpoint, the way you'd start any theorem in mathematics: with a definition.
Definition 1.1.1.1:
MORALITY: blah blah collective blah blah
Because all you have to do to justify that particular definition of morality is to contrast it with a very similar definition of ethics. The only significant difference between them is that ethics is individualist and morality is collectivist. Ethics defines a being's (individual or group) relations to other beings that are fundamentally different from it. Morality defines the INTRA-relations of a group of beings who are fundamentally similar, since they're all part of the same group. So all you have to do is set up two definitions, compare and contrast, and bingo, you've got collectivity. By definition. And that takes care of a good couple of chapters of Rawls' book.
Once you're at this collective point, by simple definition, all you have to do is say 'lo and behold, this is freaking mathematics'. You've got a definition, right? Now let's start with our assumptions. What are the assumptions? Let's start with ... nothing. And bingo, you've got something very close to the veil of ignorance. You don't need to go on a long-winded rant about how allowing people to use knowledge of their station in life to determine their station in life is circular reasoning. Which is a weak argument anyway. You don't need to make that argument because it's obvious: any theorem you can construct using the fewest assumptions (knowledge) possible is automatically stronger than a theorem using more assumptions (more knowledge).
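That last point isn't even philosophy, it's just the monotonicity of logical entailment, a textbook fact (my gloss, not anything Rawls states):

    \[
    \Gamma \subseteq \Gamma' \ \text{and}\ \Gamma \vdash \varphi
    \quad\Longrightarrow\quad \Gamma' \vdash \varphi
    \]

Whatever you can derive from the empty set of assumptions (the veil) survives every later addition of knowledge; the converse direction doesn't hold.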
And at this point you introduce the analogue of self-consistency and contradiction. And of course you're talking about a meta-structure of morals. Not one morality but a whole set of moralities (theories), each dependent on what kind of knowledge (assumptions) you started with. And you can reason about this meta-structure.
You can observe that this structure has minima everywhere, infinitely many of them, but only a small number of maxima. So obviously the maxima are more important. And for all you know there's only one global maximum, at which point it becomes extremely important. And this is all automatic if you're familiar with mathematics: the things that are rare are the things that matter more.
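Here's a toy sketch of that meta-structure in Python, just to make the shape of the argument concrete. Everything in it is invented for illustration: the assumption names, the rule-counting function, all of it. A sketch of the claim, not a formalization.

    from itertools import combinations

    # Toy universe of things an agent might be allowed to know about itself.
    # These names are illustrative placeholders, not a real taxonomy.
    ASSUMPTIONS = ["own_wealth", "own_talents", "own_religion"]

    def rules_reachable(assumptions):
        # Stand-in for "how many moral rules reach unanimity given this
        # knowledge". Toy monotone count: the less the agents know about
        # their own position, the fewer vetoes, the more rules survive.
        return 2 ** (len(ASSUMPTIONS) - len(assumptions))

    # The meta-structure: one moral system per subset of assumptions,
    # i.e. a lattice ordered by inclusion.
    systems = {}
    for r in range(len(ASSUMPTIONS) + 1):
        for subset in combinations(ASSUMPTIONS, r):
            systems[subset] = rules_reachable(subset)

    best = max(systems.values())
    print([s for s, n in systems.items() if n == best])  # [()] -- zero knowledge wins

In this toy the global maximum is unique and sits at the empty set of assumptions, i.e. the veil; the actual claim above is only that maxima are rare while minima are everywhere.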
And so it "happens" that a society with the most extensive system of human rights possible is a just society. Not because we pulled it out of our ass like Rawls does but because this is what we label the maximum because we really care about that maximum and not because we have any preconceptions about what "justice" is.
Actually, we do have a preconception of what justice is: it's what we would expect should happen. But how does this connect with Rawls' notion of "the most extensive human rights"? Oh right, because we're supposed to find these rights in our self-interest, because Rawls has written a propaganda book. Not because ... oh, real people in a society must choose a notion of morality and mathematics tells us there's only one special case that stands out. Because hey, that would smack of inevitability and not self-interest. And we all want to keep self-interest in morality, right?
And that's Rawls' work in a nutshell. An attempt by a liberal to rationalize a communist idea (universal human rights) on the basis of egotistic self-interest. Horribly misguided, and boring too!
6 comments:
You can observe that this structure has minima everywhere, infinitely many of them, but only a small number of maxima.
Minima/maxima with respect to what? Benefits? The size of the set?
That's an interesting question. It's gonna have to be the size of the set of rules. The benefits can't be measured because there is no non-arbitrary way of aggregating people's preferences. And you can't measure the benefits for each agent and THEN aggregate them.
The only non-arbitrary weighing of benefits involves simply summing over every possible agent in the world of ideas. But I think the set of such agents is almost certainly unbounded.
So we're on size. And the minimum with respect to size is obviously zero.
Actually, that's another thing: if you took benefits as a measure then you'd have problems ruling out torture worlds. Ruling them out might SEEM like the obvious thing to do, but you can't justify it on purely aesthetic grounds. And I despise the ad hoc crap that would otherwise be required.
Unless torture worlds simply didn't exist. Which is probably the case if masochists, submissives and suicides are excluded from the set of moral agents. Which I had assumed anyways.
BUT, I'm now considering the idea of letting the set of moral agents vary in the same way as the knowledge varies. It's obvious that ruling out masochists, submissives, and suicides would increase the number of moral rules. And ruling out sadists, dominants and killers would PROBABLY not increase the number of moral rules. They still want to avoid being victims more than they want to be victimizers.
And equally, it's obvious that limiting the set of moral agents to moral fanatics would NOT produce an insane result like "freedom of religion" (freedom to impose your insanity on others) since insanity is unpredictable. Only reality can serve as an intersection between all agents.
Which means that at first glance it's doable. You can vary the moral agents the same way as you can vary knowledge. Which MEANS that the lattice becomes a double lattice (not a big deal) and a semi-arbitrary assumption (that agents whose values, if universalized, would cause the destruction of civilization can't be included) is removed.
This is a pretty fuzzy argument you know, but I like it. Fuzzy semi-formal sure as fuck beats Rawls' propagandistic handwaving crap.
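To make that 'double lattice' concrete, here's the same kind of toy sketch as before: index each moral system by a pair (knowledge subset, agent subset) instead of by knowledge alone. The class names are invented for illustration.

    from itertools import combinations

    KNOWLEDGE = ["own_wealth", "own_talents"]       # illustrative placeholders
    AGENTS = ["baseline", "masochists", "sadists"]  # illustrative agent classes

    def subsets(xs):
        return [c for r in range(len(xs) + 1) for c in combinations(xs, r)]

    # The double lattice: one node per (knowledge, agents) pair, ordered
    # componentwise by inclusion. Varying the agent set is now just a
    # second axis instead of a semi-arbitrary side assumption.
    double_lattice = [(k, a) for k in subsets(KNOWLEDGE) for a in subsets(AGENTS)]
    print(len(double_lattice))  # 2**2 * 2**3 = 32 nodes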
So we've got a huge structure of all these different systems (moralities).
Their size varies based on the number of assumptions used to construct them. Noncontradiction acts as a sort of weeding function which eliminates all the systems that aren't self-consistent. We don't care about a maths where 2 = 1 and 1 != 2 are both true, or where A -> P & ~P.
And it seems like the fewer assumptions the better, because systems which use a large number of them don't make it through the weeding function as often - although this might not be universally true. But we want at least a few assumptions, because we're not looking for consistent systems which describe how rocks should interact with each other. Or how anything that can't think should interact with other things that can't think.
So we've eliminated the contradictory and decided to ignore all the really irrelevant systems.
Now we sort of graph them (although not actually - I don't even know how it could be done) by the number of theorems they generate, and yada maximum yada yada yada...
Have I understood your argument alright?
Yes. Although you can graph the systems easily, since they're just an n-dimensional lattice. You graph by the assumptions (and agents), not by the size of the results.
Also, consistency has a special meaning for moral codes. A moral code is consistent if and only if the following conditions are satisfied:
1) all agents in the group can simultaneously act on the same moral rule
2) the outcome of acting on a moral rule is the same regardless of which agent applies it first
For instance, the "right to enslave" violates the second test of self-consistency. And the "right to have a slave" violates the first test. The same goes for right-libertarian style property and Lockean claptrap about "mixing of labour".
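Those two tests are mechanical enough to sketch in code. Everything below is a toy: the state model and the 'enslave' rule are made up so that whoever moves first owns everyone else, which is exactly what breaks the second test.

    from itertools import permutations

    AGENTS = ("a", "b", "c")

    def act_enslave(state, actor):
        # Toy "right to enslave": a free actor claims every still-free agent.
        if state[actor] is not None:  # slaves can't exercise rights
            return state
        return {x: (actor if x != actor and state[x] is None else state[x])
                for x in state}

    def outcome(order):
        state = {x: None for x in AGENTS}  # None = free, otherwise = owner
        for actor in order:
            state = act_enslave(state, actor)
        return tuple(sorted(state.items()))

    # Test 2: the outcome must not depend on who applies the rule first.
    outcomes = {outcome(p) for p in permutations(AGENTS)}
    print(len(outcomes))  # 3, not 1: the first mover owns everyone -> inconsistent

    # Test 1 fails for the related "right to have a slave": three agents
    # can't each simultaneously hold a slave drawn from the same three agents.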
Morality is by definition about a GROUP of agents. The unit of moral reasoning is the group. The unit of application of morality is the group. If the group as a whole acts inconsistently or unpredictably due to morality then the morality is inconsistent.
Oh yeah, and the reason why fewer assumptions lead to more moral rules isn't exactly because of inconsistency. It's because with more knowledge, proposals for moral rules will never reach unanimity. If agents know how wealthy they are, then the rich are going to veto anything that takes wealth away from them. Even knowledge about the distribution of wealth is going to cause rules to be vetoed, because they don't fit one or another agent's aspirations.
This is probably what you meant by non-contradiction. I usually don't think of it that way because I see the process of deliberation of rules by agents as being dynamic rather than static. That's probably just a holdover from Rawls' crappy book though.
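That veto dynamic is easy to sketch too. All the numbers and the redistribution rule below are invented, and the maximin comparison behind the veil is my own assumption about how an ignorant agent would judge a rule.

    # Toy unanimity-with-vetoes. Invented numbers, invented rule.
    agents = {"poor": 10, "middle": 50, "rich": 100}

    def cap_and_share(wealth):
        # Proposed rule: cap wealth at 60 and split the surplus equally.
        surplus = sum(max(w - 60, 0) for w in wealth.values())
        share = surplus / len(wealth)
        return {a: min(w, 60) + share for a, w in wealth.items()}

    def passes(rule, wealth, knows_own_position):
        after = rule(wealth)
        if knows_own_position:
            # Informed agents veto any rule that makes them personally worse off.
            return all(after[a] >= wealth[a] for a in wealth)
        # Behind the veil every agent reasons identically: compare the worst
        # position you might land in before and after (maximin).
        return min(after.values()) >= min(wealth.values())

    print(passes(cap_and_share, agents, True))   # False: the rich veto it
    print(passes(cap_and_share, agents, False))  # True: unanimous behind the veil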