Sunday, June 22, 2014

A dilemma for Utilitarians

How do Utilitarians understand the judgments they derive from the Utilitarian calculus? Let’s work with an example:
G: It is morally right (required) to give 10% of your income to people who are less fortunate than yourselves (because doing so would maximize utility).
I’m not interested in whether G is true. Let’s assume it is. I’m interested in how Utilitarians understand G.

One way they might understand G is to say, well, G is a moral judgment. Moral judgments are (by definition) normative. They are claims or directives asserting that there are quite strong reasons for acting. G, then, can be roughly translated as
G1 “You have significantly weighty reason to give 10% of your income to the less fortunate.” Or,
G2 “(You must) give 10% of your income to the less fortunate.”
More: if these claims are genuinely moral, truly justified, then they can’t legitimately be simply shrugged off. So you can add to the translation above by saying
G3 “If you don’t give 10% of your income to the less fortunate, then you’re blameworthy.” And,
G4 “If you don’t give 10% of your income to the less fortunate, then you should feel guilty.”
What if I don’t agree that I have this significantly weighty reason? (I reflect and introspect about my reasons, and that one just ain’t there. How does the Utilitarian know better than me what reasons for action I have?) What if, rather, I think it would be really nice for me to do it, but it isn’t required, and so guilt and blame would be inappropriate? I can understand how nice it would be, but I don’t understand anyone getting angry at me if I don’t. I mean, I know (we’re supposing) that doing this will maximize utility, but why must I do that?

But on this first way of understanding G for a Utilitarian (where G implies G1-G4), my view about what reasons I have doesn’t matter. Only the Utilitarian calculus does (or the Utilitarian’s calculation of net aggregate happiness does). Utilitarians don’t care about my view about what I have reason to do. They don’t care about whether G1-G4 make sense from my considered point of view. But that seems oppressive. It seems less like a justified pronouncement of moral authority and more like authoritarianism – Utilitarians just bossing me around or using moral language to manipulate me into doing what they want me to do.

I have Utilitarian friends (well, can Utilitarians actually be friends?) who will deny that G means G1-G4. Instead they say nothing practical straightforwardly follows from G. It’s good, in some sense, when it happens that utility is maximized, but it turns out that moral judgments aren’t actually judgments about what people have reason to do. “Giving 10% of your income to the less fortunate is ‘right’” these Utilitarians would be saying, “but I don’t know what to tell you to do.” So, people who fail to give 10% of their income to the less fortunate would be failing to maximize utility, but that doesn’t mean they’re blameworthy or that they acted against really strong reasons for action. Therefore, the judgment isn’t authoritative; G is just a claim about what would maximize utility and so, according to Utilitarianism, the “right” action. But you might not have particularly strong reason to do the “right” action.

But if G is understood in this way, instead of being oppressive, it’s simply inert. These ‘moral’ judgments seem like abstract theoretical claims and don’t even claim normativity for themselves. This option is at least odd because morality is typically thought to be normative, playing an important practical role in human relations.

Either way, Utilitarianism seems like a pretty revisionist view, and not in a good way.

Kyle Swan
Department of Philosophy
Sacramento State

18 comments:

  1. Kyle, I think your choice of example actually is important in making your case plausible. Suppose the example was:

    It is morally right (required) for you to pull the drowning child out of 3 feet of water (because doing so would maximize utility).

    If you had used this example and followed it with the "Suppose I don't agree..." argument, then we would just say that the fact that you don't agree shows there is something fundamentally wrong with you. You are not a moral person. Every moral person agrees that how our actions affect the balance of pain and pleasure in the world is relevant.

    The reason your example works somewhat is that most of us do not accept that we are morally required to maximize utility. We agree that it is morally exemplary (not just nice) but not morally obligatory. We believe that it is morally obligatory to abstain from actions that do net harm to others for our benefit, and we agree that it is morally obligatory to help others when there is very little cost to us. But because Utilitarianism has quantified moral value and placed it on a continuum, it acknowledges the reality that there is no basis for drawing sharp lines that provide answers to our questions about obligation, duty, and moral blameworthiness. It is, as you say, a revisionist theory. These are no longer the compelling questions.

    Although you don't say so, you seem to be operating (as Kant did) on the assumption that any rational person should be able to appreciate the force of moral reasons. Of course, you would need to say more about what you mean by rationality, but I think we know this not to be the case. In fact (as you know) we know from prisoner's dilemma, ultimatum and trust games that moral behavior is fundamentally irrational if rationality is defined in terms of self-interest.

    There are significant problems with Utilitarianism, but it is a huge step forward in our understanding of morality. The Utilitarians are right to identify our capacity to care intrinsically about the pain and pleasure of other creatures as central. It can be irrational, but it is one of the things that makes us moral creatures.

  2. Well, let’s try your example, Randy.

    S: It is morally right (required) to pull a drowning child out of 3 feet of water.

    And so,

    S1: You have significantly weighty reason to pull the child out.
    S2: (You must) do so.
    S3/S4: If you don’t, then you’re blameworthy and should feel guilty.

    I think all that’s right, though not because pulling the child out maximizes utility. If someone didn’t agree that they should pull him out (that they have strong reason to, etc.) I admit I’d be suspicious. I suspect, in fact, that nearly all of us have beliefs, values and commitments that give us sufficient reason to save the child. It would be this fact, not the fact that it maximizes utility, which justifies the claim on us in a situation where we could easily save a drowning child. But that’s just to say that this example of a moral claim passes an important justificatory test -- one that the example I used doesn’t pass.

    I’m not saying that there’s a big bright line here that makes this always easy to apply. But there is, I think, a fundamentally important connection between the legitimacy of moral judgments and whether or not a given judgment supports certain characteristic moral emotions (like guilt and anger). So, right, the fact that most of us don’t accept that these apply in cases where you fail to maximize net aggregate utility (and do apply in cases where you fail to save a child drowning in 3 feet of water) is a difference-maker.

    I agree that utilitarians are right to advert to our capacity to care about others, but wrong to the extent they think we have the capacity to care about them in an impersonal and impartial way, and wrong to the extent that they think that a host of moral requirements apply to us even if we don’t have this capacity.

  3. Kyle, I was very happy to read that you said, “utilitarians are right to advert to our capacity to care about others.” That is surely the basis of our finding the right thing to do. However, you said to Randy, “I agree that utilitarians are right to advert to our capacity to care about others, but wrong to the extent they think we have the capacity to care about them in an impersonal and impartial way.” It strikes me that this concept of “capacity to care” is overblown. If I don’t care about poor people, it is wrong for me to retreat to a defense of this by saying I just don’t have the capacity to care for them, and in that way try to represent my opponent as urging me to do something I cannot do.

    Of course I can do it. We can all care for people even if we don’t. We can be impartial even if we have natural tendencies not to be impartial.

    Back in 19th-century England, the members of Parliament were making their voting decisions based on their “moral emotions (like guilt and anger)” and on the moral sense. Given their class position, those emotions and senses were a product of the fact that they didn’t care about poor people, and non-landed people, and non-white people. So, if they were queried and replied honestly, then they might say they just didn’t have the capacity to care for the lower classes. But that would be an error. They had the capacity; they just didn’t want to use it. John Stuart Mill and other utilitarians argued that even if these members of Parliament did not care for the lower classes, nevertheless they ought to use the utilitarian calculus to guide their voting on social legislation aimed at improving the lives of the lower classes. Despite the imperfections of utilitarianism, the calculus would have been useful in guiding these legislators to genuinely moral judgments, even if the legislators’ sympathies and emotions and moral sense were leading them to vote against the social legislation. If the typical member of Parliament had responded to Mill that “there is, I think, a fundamentally important connection between the legitimacy of moral judgments and whether or not a given judgment supports certain characteristic moral emotions,” and if for that reason they had told Mill that voting for the social legislation that benefited the poor just didn’t “support their emotions” and so they were voting against it, then Mill would have seen their reasoning for what it was.

    Replies
    1. Brad, this is really nicely put, thanks.

    2. Brad, you moved pretty quickly from ‘don’t care in a completely impersonal and impartial way’ to ‘don't care about the poor’. The MP in your story who votes against the interests of the poor is probably violating a moral duty. Someone who walks past a child drowning in three feet of water probably is, too. But a typical person who buys his child a $50 computer game instead of sending those $50 to an organization working to alleviate world poverty isn’t. I didn’t mean that the sense in which these judgments are supported by the moral emotions is by way of some vague and subjective moral sense. Maybe it would have been better if I had said something like “participant reactive attitudes” instead of “moral emotions”. I have in mind the same thing Mill does here (in Ch. 5 of Utilitarianism):

      “We do not call anything wrong, unless we mean to imply that a person ought to be punished in some way or other for doing it; if not by law, by the opinion of his fellow-creatures; if not by opinion, by the reproaches of his own conscience…. It is a part of the notion of Duty in every one of its forms, that a person may rightfully be compelled to fulfil it. Duty is a thing which may be exacted from a person, as one exacts a debt. Unless we think that it may be exacted from him, we do not call it his duty. Reasons of prudence, or the interest of other people, may militate against actually exacting it; but the person himself, it is clearly understood, would not be entitled to complain. There are other things, on the contrary, which we wish that people should do, which we like or admire them for doing, perhaps dislike or despise them for not doing, but yet admit that they are not bound to do; it is not a case of moral obligation; we do not blame them, that is, we do not think that they are proper objects of punishment. How we come by these ideas of deserving and not deserving punishment, will appear, perhaps, in the sequel; but I think there is no doubt that this distinction lies at the bottom of the notions of right and wrong; that we call any conduct wrong, or employ, instead, some other term of dislike or disparagement, according as we think that the person ought, or ought not, to be punished for it; and we say, it would be right, to do so and so, or merely that it would be desirable or laudable, according as we would wish to see the person whom it concerns, compelled, or only persuaded and exhorted, to act in that manner.”

      That's more important for a consideration of what our reasons for action are, and how we should rank them, than an abstract consideration of what would maximize utility is. Which reminds me of this great quote from Frances Kamm:

      “I tend to think that some of the philosophers who think that we have very large positive duties, but don’t live up to them, are not really serious. You can’t seriously believe that you have a duty to give almost all your money away to help others in need, or even a duty to kill yourself to save two people, as one of my former colleagues believes, and then, when we ask why you don’t live up to that, say, ‘Well, I’m weak. I’m weak.’ Because if you found yourself killing someone on the street to save $1000, you wouldn’t just say, ‘Well, I’m weak!’ You would realise you’d done something terribly wrong. You would go to great lengths not to become a person who would do that. That’s a serious sign that you believe you have a moral obligation not to kill someone. But when somebody says, ‘Our theory implies that you should be giving $1000 to save someone’s life and if you don’t do it, that it’s just as bad as killing someone,’ and he says, ‘I don’t give the $1000 because I’m weak!’, then I can’t believe he thinks that he really does have that obligation to aid or that his not aiding is equivalent to killing. Imagine him coming up to me and saying, ‘I just killed someone for $1000, but I’m weak!’ Gimme a break! This is ridiculous. There must be something wrong with that theory, or else there is something wrong with its proponents.”

  4. Kyle, thanks for that. You seem to be fundamentally concerned with the question of obligation. In your response you've come back to the question of what justifies the claim on us to save the child. But I agree that the principle of utility doesn't provide a calculus for determining our obligations. I agree with you that it is a revisionist theory that dispenses with many such notions. But I don't think you should argue that, because it fails to answer this question, it fails to focus on what is morally relevant. I think it is obvious that the alleviation of suffering and the promotion of thriving in the world is fundamentally morally desirable, and I think you do too.

    Replies
    1. Right, Randy, the point of the post is that there’s a dilemma for a utilitarian understanding of moral obligations. I’m happy with the result that utilitarians have a fine (at first blush, at least, and as long as they stick with ordinal comparisons) account of “better” but a terrible account of obligation and reasons for action.

      But I’m not sure that utilitarians actually dispense with the notion of obligation. Maybe some do. At least many of them, though, will definitely say that it’s not just better to save the child from drowning, but that I’m obligated to do it. And I’m assuming they think this because doing so maximizes utility. They have, then, an account of obligation and reasons for action – just, as I say, a really oppressive and bad one.

      Regardless, it’s really, really important to have an account of obligation. This is related to why I think the second horn of the dilemma is so horn-y. It’s very unlikely that people would be able to have a decent society with each other without rules and obligations (and, I think, more specifically, deontic constraints), and so we need a way of determining which ones to have and what they will be. If you’re right that utilitarianism won’t be much help here, then that’s not good news for the view.

      Finally, to your last point, utilitarians don’t have a monopoly on an account of “better”. No plausible moral theory denies that consequences matter morally, and some deontological theories explicitly make room for teleological reasons. But utilitarians say that consequences in the currency of pleasure or preference satisfaction are all that matter morally. That’s not true. Certain pleasures and instances of preference satisfaction don’t have any moral value at all. And forcing some to be worse off for no other reason than to generate merely marginal gains in the aggregate is dickish behavior (I think that’s a technical term). So, I think there are better accounts of “better” out there than the utilitarian one.

  5. Kyle, thanks, that's increasingly clear. I don't think you quite characterized your initial concern like that. You characterized it in terms of whether maximizing utility is a binding reason for action. Obligation is a specific form of that. I think you should just allow that it is such a reason, since all but psychopaths agree that it is, but that utility alone underdetermines our moral obligations.

    I think you are right that we can't dispense with the notion of moral obligation. I think we need a theory of contracts to deal with that.

  6. I don't think I want to do that, but maybe I'm not clear about what you mean by a binding reason for action. I deny that maximizing utility is one, because I think we can pretty regularly, and without regret, fault, or blame being appropriate, and without worries about psychopathy, simply shrug off considerations of maximizing utility. Here's some evidence for that: at any given moment there's an optimific (relative to net aggregate pleasure) action I could perform. I do that action about .005% of the time, if that, as far as I know. *Shrug*

    I will allow that we generally have relatively weighty reasons to promote various kinds of goods. But as I said, you don't have to be a utilitarian or even a consequentialist to say that.

  7. Hmmm, I'm not sure what the shrug indicates here. I tend to think the problem is (1) epistemic: Shrug, yeah, too bad I don't know what it is and (2) Shrug, yeah, and a lot of those would be really admirable, too, hats off to the moral heroes who do that sort of thing. But I don't accept: Shrug, what does that have to do with morality? Do you?

    Replies
    1. When I wrote it the shrug just meant that the realization that I wasn't maximizing utility wasn't the source of any real sense of regret, fault, blame, etc. But I might even think the stronger (?) thing you say you don't accept. One reason is that I think there are pleasures that don't have moral value.

  8. I agree with your shrugginess on the failure to maximize utility. There is such a thing as close enough, especially if you consider DMU (diminishing marginal utility). But rejecting the promotion of happiness and the alleviation of suffering as fundamental moral goods because there are some weird exceptions strikes me as not right.

    Replies
    1. Me, too. The shrugging wasn't about that, though, only about MaxU. Again, we have relatively weighty reasons to promote various kinds of goods (and alleviate various kinds of bads), but promoting them will be subject to deontic constraints.

  9. How do you think this fits with my piece on rationalization? Would it be friendly of me to say that Utilitarianism fails to rationalize (in a good way) ethics and that you see rationalization as a desirable outcome? The reason I think this is not obviously friendly is that it permits me to construe this project as less an act of discovery (see our joint post on moral discovery) and more one of creation. In other words, you don't have to be interpreted as claiming (implausibly in my view, given that morality clearly evolved to satisfy the needs of cooperative animals and moral judgments spring from our moral emotions) that there currently is a fundamentally rational basis, but that to construct one, to construct a morality that we can all accept because it has a fundamentally rational basis, is a very worthy goal. What say ye?

  10. I think I want to say yes, but I'm more concerned with Utilitarianism's failure to justify moral claims (demands, requirements) than its failure to rationalize them. Maybe I need to think more about the connection between justification and rationalization (in your sense). But right now I'm not sure what's supposed to be implausible about morality having a rational basis even on the view, which I think is correct, that morality evolved to satisfy the needs of cooperative animals like us with the moral emotions we have. I'm thinking that we have really strong reason to find ways to cooperate with each other. Does that fail as a rationalization? If not, it could be the rationalization and at least part of the justificatory story, too.

  11. I think I really like what you are saying here. To my mind, the reason for thinking that the evolutionary origin of morality vitiates its rational basis depends on distinguishing between explaining its origin and optimizing it. I think our evolved cooperative tendencies are, like our evolved cognitive capacities, a bag of tricks and heuristics for dealing efficiently with an array of practical problems associated with predicting and controlling the behavior of other beings. These methods are never optimal from a rational perspective, just like our cognitive biases and heuristics aren't optimal from a probabilistic perspective.

    But now we might say, ok, fine, let's let morality play the role of probability theory and expected value theory. Sure, cooperative behaviors originally evolved without any intent and without any deep rational principle guiding them, but now we have the ability to make this a conscious aim, and discover the means for promoting and optimizing it. Let's take this motley system given us by evolution, embrace cooperation as a goal that is good, and rationalize the system toward that end.

    Of course, there are still all sorts of questions, right? Some of them are your original questions. Cooperation itself isn't any kind of intrinsic good, any more than maximizing utility is (as you see it). It depends on what end we are cooperating towards, right? And the bullying problem is still there, much discussed by Libertarian types. What if I don't want to cooperate? (I personally don't see how any normative theory doesn't end up oppressing someone.)

    But there might be some interesting solutions within this schema. For example, maybe the best cooperators are the ones that do not simply value the pain and pleasure of others as a means to the end of cooperation, but who value it intrinsically.

  12. Two things: first, people like Robert Boyd, Peter Richerson, Herb Gintis, etc. talk about strong or moralistic reciprocity, a kind which is based on a source of motivation to follow and enforce moral rules for their own sake and independently of self-interest. They do, in fact, suggest that successful large-scale cooperative social relations would be impossible without it. Importantly, the type of reasoning is deontic, not goal-directed. Is that type of reasoning rational?

    That's the second thing. Jerry Gaus has developed a Hayekian line of response to this intellectual "felix culpa" worry that all our rule-following is irrational, but it's a good thing we do it. The idea from Hayek is that there's a mistake here in forgetting that human rationality itself is a product of evolution and our social interactions with each other. He says that reason not only shapes our social relations with each other; it's also shaped by them. Optimizing with respect to our moral reasoning is, therefore, likely to be fraught.
