Eric Raymond writes a very good post on Natural Rights and morality. The general approach he takes is the same as mine: utilitarianism sounds alright, but actually predicting the consequences of particular actions at particular moments is so damned hard that the only sensible way to do it is to get to a set of rules that seem to produce mainly good outcomes, and then treat them as if they were moral absolutes. Deep down, I know they’re not moral absolutes, but, as in other fields, a convenient assumption is the only way to make the problem tractable.
Like Raymond, I followed those principles to a libertarian conclusion. Well, to be completely honest, it’s more that I used those principles to justify the “natural rights” that I’d previously, and naively, considered to be self-evident.
It’s still a big step. If you start from moral laws, you can always predict roughly where you’re going to end up. Using a consequentialist framework, even one moderated through a system of rules, there’s always a chance that you may change your mind about which set of proposed “moral absolutes” actually works best. That’s what happened to me.
I was particularly struck by a phenomenon where the more deeply and carefully I attacked a question rationally, the more my best answer resembled some traditional, non-rationalist, formulation. That led me to suspect that where my reasoning did not reach a traditionalist conclusion, I just wasn’t reasoning far enough.
That’s not particularly surprising. Ideas evolve. Richard Dawkins made a big deal of the fact that evolutionary success for an idea isn’t the same thing as success for the people who believe it. That is a fair point in itself, but I do not recall him, at least in his writings of the 80s, which I read avidly, drawing a parallel with the well-known conclusion, made here by Matt Ridley via Brian Micklethwait, that in the very long run parasites do better by being less harmful to their hosts. By that principle, new religions (parasitic memeplexes) should be treated with fear and suspicion, while old ones are relatively trustworthy. Hmmm.
There are whole other layers to moral philosophy beyond this one of “selecting” rules. On one hand, utilitarianism is a slippery and problematic thing in the first place; on the other, moral rules, whether absolute laws or fake-absolute heuristics, have to be social to be meaningful, so the question of how they become socialised and accepted cannot be completely disentangled from what they should be. I am satisfied with my way of dealing with both these issues, but at the end of the day, I’m not that keen to write about them. When I think I’ve done moral philosophy well, I end up with something close to common sense. When I do it less well, I end up with things catastrophically worse than common sense. I therefore am inclined to rate common sense above philosophy when it comes to morality.
That's moral reservationism: "It's impossible to construct a system of ethics that improves on moral common sense. Any system that purports to do so is either (a) bogus, or (b) justifiable via common sense, and thus a special case of it."
In practice, most people's morality, insofar as it actually exists, is duty-based. They often talk a good game on utilitarianism, but that's not how they actually behave or really feel at the gut level. Instead they've got a hierarchy of duties that they try, in a spirit-willing/flesh-weak manner, to satisfy. The proof of this is to compare the money spent on pets with the money spent on starving children in 'fill in the blank here'.