Thursday, January 31, 2013

Answer to Jake II - Ethics

Jake's comments on my post in answer to him were fantastically well done, and I urge you to go read them. Due to the limitations of the Blogger comment system Jake noted there, I'm going to put up my reply as a series of blog posts rather than attempting to respond only in the comment thread. One unfortunate aspect of that is that, visually, it gives my argument more weight than it deserves, and Jake's less. Please do discount that as much as you can. The first response (on ethical issues) is below the fold. Response on legal issues to follow in another post. Thanks again to Jake for a wonderful discussion.
I am a descriptive moral relativist. To steal a line from the Wikipedia page, this simply means that I “admit that it is incorrect to assume that the same moral or ethical frameworks are always in play in all historical and cultural circumstances.”

Okay. Thanks to that helpful clarification, I think the differences between your view and moral realism (in the realism as opposed to nominalism sense of “realism”) may not cash out to much pragmatically. You might end up thinking that circumstances change the moral foundations, but a Catholic casuist who believes in unchanging moral foundations is still going to think that circumstances change the application of the moral principles. So while you’d call some action the result of a changed morality, and the realist casuist would call it the result of a differing application of an unchanged morality, you might both end up taking the same action. So far, not so worrisome.

I go a step further than strict descriptive moral relativism, and say that the basis for morality is relative as well. I mean something very specific here: that what it means to be “moral” is entirely contingent on what it means to be “human.” That is to say, if humans were different, by quirk of evolution or God, then morality itself would be different.

Thomism actually agrees with you there. Natural law takes human nature as its starting point, notes the telos implied by this nature, and attempts to describe how this nature can flourish and attain its telos: Our nature is that we are rational animals. Our telos is to attain the beatitude of dwelling with God. Thomists would say that any rational animal, by virtue of being a rational animal, will have that same telos.

This is a minor distinction, but quite important in practice, particularly when it comes to the debate we are currently engaged in. Whether or not homosexuality is morally acceptable is not a question of “do the parts fit?” or “is this relationship ordered towards an external goal?”
This is where we part company. In addition to being a realist about morals, a Thomist is an Aristotelian realist about essences: humans are rational form (soul) united with matter (body). It’s not so much a question of what goal (telos) the acts are ordered towards, as it is of what goal the human actor is essentially ordered towards. Any rational animal, no matter in what society (or on what planet, for that matter) he or she is situated, is intrinsically directed toward union with God (what Aristotle was intuiting in his discussion of the “contemplation of the good” as the goal of a rational animal in the Ethica Nicomachea) as his or her goal simply by being a rational creature.
Rather, it is a question of “does this maximize utility function X that I am trying to maximize?” (we shall get to my utility function later, but “human happiness” is a reasonable approximation.)
As virtue ethicists, Thomists will say that human happiness flows from acquiring a virtuous nature capable of union with God, not from maximizing utility in some hedonic calculus. However, as above, the difference between our views might not cash out to much pragmatically, depending on how much the practice of Thomist virtue ethics resembles the maximization of your utility function.
Firstly, that whether something is right or wrong is a matter of reason, and derivable from whatever first principles you are using. Picking different first principles will (obviously) yield different results, but once you establish your first principles, there are indeed right and wrong answers (or more accurately, there is a hierarchical ranking of sub-optimal choices. Some things are “more wrong” than others.) I will revisit my first principles shortly.
Sure. Given any set of axioms, some deductions are valid and some aren’t.
Secondly, I am not convinced (and indeed, I find it demonstrably untrue) that morality is the same for each person. What makes me happy is not necessarily the same as what makes someone else happy, and treating us in the exact same way may have very different moral implications.
Again, I’m happy to say that I’m not sure this cashes out to much in practice besides semantics. Thomists will just say that although morality is unchanging, the casuistical application will vary depending upon temperament and circumstances.
Case in point: I am not same-sex attracted. Telling me that I’m not allowed to marry another man is no burden to me; I wouldn't have wanted to do so anyway. But to someone who is exclusively SSA, this proscription is a much bigger deal. Clearly the moral _consequences_ are different, but I go a step further and say that the morality of such a proscription _changes_ based on the character of the two individuals being affected (me and the imaginary-SSA-person). This flows quite naturally from my earlier claim that the basis for morality is relative to the agents being affected.

Again, this could be rephrased as a difference in application. However, I think the Thomist assertion that there is no ultimate difference in the intrinsic essential nature of rational animals is important here in explaining why we end up disagreeing. The SSA person is still a rational animal, ordered toward the beatific vision. Disordered acts will no more help the SSA human flourish than they will any other human. However, there are going to be differences in terms of the need for compassion and solidarity with the SSA person that another human might not need in this area, and differences of applied moral practice in terms of, e.g., whether living in a same-sex religious community is going to be an occasion of sin or not.

TL;DR: I think it’s not my moral realism that’s leading to our disagreement here, so much as it is the realism about essences (e.g., “rational animal”) from which the teleological morality flows. We disagree about what human nature is, and consequently about what the good for humans is. Given that disagreement, I’d venture that each of us is correctly deducing our preferred applications of morality to circumstances. We’re just starting from different axioms. About which more in the next comment.

First Principles:

I have three clear axioms that I appeal to unapologetically. I accept them as brute facts about reality.
Fair enough.
1) Reason is valid
Sure. A theist wants to ask foundational questions here about why you accept this as a brute fact in the absence of a metaphysical framework in which it makes sense, but I doubt that road will take us anywhere of mutual interest.
2) The subjective experience of other humans matters.

Even though you can put together some decent game-theory reasons for why we should accept First Principle 2 as true even if it’s not, in practice I really am just accepting it as a brute fact. If I’m being totally honest, the game-theory-evolutionary-psychology angle seems like a more robust description of the way the world _actually_ works, but it would lead me down a path of selfish objectivism, which is not a way I want to live my life (I am in the unfortunate position of thinking that knowing and acting on the truth will probably make my life worse. It is an uncomfortable one for a secular empiricist.)

I concede that the game-theory reasons for a tit-for-tat retributive justice ethic are rock solid. Mercy seems to require non-game-theoretical grounding to be a valid moral principle and not just a misfiring over-application of evolved kin-and-kith-directed compassion and empathy. Given your extremely admirable candor about your qualms here, I’m not inclined to belabor this.
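(An aside for readers who haven't run into the reference: “tit-for-tat” is the reciprocating strategy that famously dominated Axelrod's iterated prisoner's dilemma tournaments. The sketch below is purely my own illustrative addition, not part of Jake's argument; it uses the standard tournament payoffs and shows why simple reciprocity is so robust.)

```python
# Illustrative sketch only: tit-for-tat in an iterated prisoner's dilemma,
# using the standard Axelrod-tournament payoffs. 'C' = cooperate, 'D' = defect.

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then mirror the opponent's previous move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A pure defector, for comparison."""
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return each side's total payoff."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == '__main__':
    # Exploited only once, then retaliates every round thereafter.
    print(play(tit_for_tat, always_defect))  # (9, 14)
    # Two reciprocators settle into mutual cooperation from the start.
    print(play(tit_for_tat, tit_for_tat))    # (30, 30)
```

The point of the toy model is just that retribution-in-kind needs no grounding beyond self-interest to be stable, which is exactly why mercy is the harder principle to account for on those terms.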
3) When in doubt, pick freedom
First Principle 3 is admittedly poorly worded, but I hope the idea comes across. It seems to me that I have a much stronger argument here than I did for First Principle 2, because as a human, I have some special revelation as to what it is that makes the human experience better. My empirical first person experience is that freedom is paramount to human flourishing. All of my other beliefs (so far as I can tell) derive from these first principles and empiricism.
J.S. Mill’s experience led him to a similar faith in free citizens’ ability to flourish in an unfettered marketplace of ideas and lifestyles. However, your “special revelation” is idiosyncratically subjective: others (Chesterton and Pascal, e.g.) have seen original sin as the most subjectively self-evident fact about human nature. Subjective awareness of human sinfulness and frequent incapacity for self-mastery has led rightist Catholic thinkers like de Maistre and Bonald, whose politics I find distasteful, to conclude, in brief, “When in doubt, pick authoritarianism.” I think Mill and de Maistre are both telling us something important about prudent politics. However, both visions strike me as incomplete. Thomism’s project of a systematically essentialist account of human nature, across its myriad temperaments and societal contexts, includes both, and employs deduction from human essence to correct for the biases in the temperament (libertine, authoritarian) of any particular philosopher.

Utility Function:

This is just a note to say that I acknowledge that my moral system runs into problems (as all moral systems do) when confronted with competing goods or competing evils. We all exercise some implicit utility function to resolve such issues. I haven’t formalized mine, but it ranks freedom and happiness very high, and tradition, authority, and homogeneity very low.
I have inevitable quibbles here. But given your axioms, this is all sensible.
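(Purely as an illustration, and emphatically not Jake's own formalization, since he says above that he hasn't written one down: an implicit utility function of this kind is often sketched as a weighted sum over the values at stake. The weights and the scoring of the two hypothetical policies below are entirely made up for the example.)

```python
# Toy sketch of a weighted-sum utility function. Weights and option scores
# are hypothetical; they only encode "freedom and happiness weigh heavily,
# tradition, authority, and homogeneity weigh lightly."

WEIGHTS = {
    'freedom':     5.0,
    'happiness':   5.0,
    'tradition':   0.5,
    'authority':   0.5,
    'homogeneity': 0.5,
}

def utility(option_scores):
    """Weighted sum of how well an option serves each value (scores in [0, 1])."""
    return sum(WEIGHTS[value] * option_scores.get(value, 0.0) for value in WEIGHTS)

# Two hypothetical policies, scored by the values they serve.
policy_a = {'freedom': 0.9, 'happiness': 0.7, 'tradition': 0.2}
policy_b = {'freedom': 0.3, 'happiness': 0.6, 'tradition': 0.9, 'authority': 0.8}

print(utility(policy_a))  # 8.1  -> preferred, since it serves the heavily weighted values
print(utility(policy_b))  # 5.35
```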
I hope I’ve demonstrated why I think I can make moral claims despite the fact that morality is, in practice, relative. Moral imperatives around public safety derive from principle 2. My “libertarian assumptions about morally acceptable spheres of state action” derive from principle 3. I can answer other objections you raised specifically if it remains unclear how I derive those positions from my first principles.
Well, you can make any moral claims you want, but if your three axioms aren’t grounded, then the claims you deduce from them aren’t warranted by any foundation beyond those axioms being brute facts you’ve chosen to accept. However, that’s not really a problem with “moral relativism,” since you do see your three axioms, as far as I can tell, as universally binding. Given your axioms, though, I think your deductions are sound. Further, any disagreement I have about whether your acceptance of these three axioms is warranted is ultimately a disagreement about theism and about metaphysics, which would take us far afield indeed from civil SSM. So I’ll just happily concede that your position is respectable despite my disagreements with it, and leave it there.
