Friday, May 25, 2018

What We Demand Of Each Other

In the last post I was thinking about Hipparchia's paradox. Hipparchia was a Cynic philosopher who lived in Athens in the 4th and 3rd centuries BCE, and she posed the puzzle of why it wasn't OK for her to hit a guy called Theodorus, even though it would have been OK for him to hit himself, and morality is supposed to be universalizable. I did try discussing the puzzle a bit, but what I mostly wanted was for moral philosophers to take the puzzle more seriously, to work out how their moral theories can accommodate it, and to start calling it 'Hipparchia's paradox'. I'm not really the kind of person I was hoping would think about it more, but I've been thinking about it a bit more anyway.

I mentioned that we can try resolving the paradox by saying that people are allowed to waive consideration of negative consequences to themselves in the moral evaluation of their own actions. And I worried that if these moral waivers are a thing, then there might be other kinds of moral waivers, and our final theory might end up looking unrecognizable as consequentialism. Some people will be fine with that, of course, but I've long been a bit of a fellow-traveller of (impartial, agent-neutral) consequentialism, and consequentialists (especially impartial agent-neutral ones) are probably the people for whom Hipparchia's paradox is most puzzling.

So, I've been thinking some more about these waivers, and what I'm thinking is that the reason they feel kind of scary is that they're an example of voluntarism[1]. Voluntarism is the idea that what's right and wrong is fixed in some special way by someone's will. What counts as the will and what counts as the relevantly special way is a bit up for grabs, and some of the disputes over voluntarism will be verbal. But it's not all verbal, and a certain kind of moral philosopher should be scared of voluntarism. A classic version of voluntarism is divine command theory, which is sometimes called theological voluntarism. Divine command theorists say this sort of thing:

  • When something is wrong, it's because it goes against God's will.
  • When something is wrong, it's because God has decreed that it's wrong.

Divine command theory isn't all that popular among moral philosophers nowadays, although it had a pretty good run with them in the Middle Ages, and it's still alive and well in the moral thinking of some religious people. Its detractors often view it as getting things backwards. Things aren't wrong because they go against God's will; God wants us not to do them because they're wrong. Similarly, when God says something's wrong, that's because it is wrong, not the other way round. This problem is called the Euthyphro problem, after the dialogue Plato wrote about it. The Euthyphro problem isn't just about divine command theory though; it applies in some way to all versions of voluntarism. People don't make things wrong by wanting them not to be done; they want them not to be done because they're wrong, or at least they should. The worry is that anyone adopting a version of voluntarism is taking the wrong side in the Euthyphro problem. That sounds like bad news for the waiver response to Hipparchia's paradox.

Nonetheless, I think it might be worth giving voluntarism another look, at least in the form of these waivers. There are two reasons. First, Hipparchia's paradox does provide a direct argument for waivers. Second, there's a big difference between God being the boss of us and us being the boss of us, or even better, the people our actions have an impact on being the boss of us. Arguments against divine voluntarism may well not carry over to this more worldly form of voluntarism. So now here's the next question: if waivers are a thing, which waivers are a thing? In the previous post I made a list of questions about possible waivers, and I'll repeat that list here, with a bit of commentary explaining why I thought they were worth asking.

  • Can I waive consideration of consequences to myself in the moral evaluation of someone else's actions?

The idea there is that if I volunteer to take one for the team, then it isn't wrong for the team to go along with that. Suppose you want to go to a party which will be pretty good, and I want to go to a different party which will be very good, but one of us has to stay home and wait for the plumber to come and fix the toilet. (We'll assume we'd both enjoy staying home equally.) What I'm suggesting is that if I volunteer to stay home, you don't wrong me in going along with this, even though I would probably enjoy my party more than you would enjoy yours. Now, you might disagree with this assessment of the situation. But the point is quite similar to Hipparchia's paradox: just as Theodorus is allowed to hit himself, people are allowed to sacrifice their own interests for others, even if the sacrifice is greater than the benefit. And even if they can't make the sacrifice without the co-operation of the beneficiaries, the beneficiaries don't do anything wrong in co-operating. If the person making the sacrifice says it's OK, then it's OK. (I'm not saying this is right, but this is the thinking behind the question.)
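
To put toy numbers on the party case (the numbers are invented; they're just there to show the structure): suppose my party is worth 10 units of enjoyment to me, yours is worth 6 to you, and staying home is worth 2 to whichever of us stays. The impartial sums then come out as

\[
\underbrace{10 + 2}_{\text{I go, you stay}} = 12 \qquad > \qquad \underbrace{6 + 2}_{\text{you go, I stay}} = 8
\]

so straight impartial consequentialism says that in letting me stay home you bring about the worse total, and thereby wrong me. The waiver proposal is that my volunteering takes my forgone enjoyment out of the moral evaluation of your action, so you don't act wrongly in accepting the smaller total.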

  • Can I waive consideration of some but not all negative consequences to myself?

I'm not sure I expressed this one as clearly as I could have, but here's what I'm thinking. Maybe it's OK for me to waive consideration of minor things, but not major things. Or maybe it's OK for me to waive consideration of forgone pleasures, but not of positive harms. I won't go into the details of what sorts of things I might not be morally allowed to do to myself, or of what sorts of things other people might be wrong to do to me even with my permission, but there's a reasonably venerable tradition of thinking that there is such a distinction to be made. And if there is, you have to wonder what its basis might be.

  • Can I waive consideration of bad things happening to me even if someone else cares about me and so these would also be negative consequences to them?

Nobody is an island, and often if something bad happens to Theodorus, he's not the only person who suffers. If Theodorus hits himself, this might upset his friends, and maybe it's wrong because of that. I think there's a fair bit of pressure from common-sense morality to say that Theodorus hitting himself is nobody's business but his own, and if it bothers his friends then he's entitled to waive that fact from the moral evaluation of his action. There are probably limits to what common-sense morality permits along these lines, and maybe I'm getting common-sense morality wrong. But even if I'm not getting it wrong, I'm not really sure how this dynamic is supposed to work. One possibility is that waiving the harm Theodorus does you by hitting himself is partly constitutive of the very relationship in virtue of which Theodorus hitting himself harms you. While I do think this idea has some superficial appeal, I fear its appeal may be only superficial. But perhaps there's the germ of something workable in there.

  • Can I do this on an action-by-action basis, or at least a person-by-person basis, or do I have to waive it for all people or all actions if I waive it for one?

This is an issue about universalizability and fairness. How arbitrary am I allowed to be in dishing out permissions? One possibility is that we have a lot of latitude about what permissions we can give, but a lot less latitude about what permissions we should give. But I expect we also probably have a fair bit of latitude with the ones we should give, because these permissions are bound up with personal relationships, and we don't have personal relationships with everyone. In particular, waivers in personal relationships might often be part of a mutually beneficial reciprocal arrangement. Being morally in the wrong is bad for you, and personal relationships are difficult, and provided you're both trying hard it might be better not to be morally in the wrong every time you mess up. These waivers probably shouldn't have to be blanket waivers: a certain amount of mutual pre-emptive forgiveness doesn't make it impossible for you to wrong each other.

  • Are there ever situations where someone can waive consideration of a negative consequence to someone other than themselves?

Part of the issue here is the nobody-is-an-island problem I discussed a couple of questions ago. But the issue also arises in the case of children, and of other people who have someone else responsible for their welfare to some extent. It may also arise with God. I think it's quite possible that there just aren't any exceptions of this kind. You're allowed to take one for the team, but you're not allowed to have your children take one for the team. But here's an example I've been thinking about a bit. Suppose that you and I are doing a high-stakes pub quiz together, and we win a family trip to Disneyland. A reasonably fair thing to do would be to auction the trip between us, with the winner paying the loser for the loser's half. But suppose I just tell you to go ahead and enjoy yourself. My family are losing out here as well as me, but somehow it still feels like I've done something nice, rather than robbing my family of the equivalent of half a trip to Disneyland. I think I'd probably end up coming down on the side of saying I'm wrong to give you my half of the trip, although perhaps the matter is complicated by the fact that my family weren't on the team, so it's my prize not theirs. But letting you have the trip does still put my family out. I'm really not sure what I think about this. But I think it's likely people do make this kind of collective sacrifice from time to time, and that they feel like they've done a good thing and not a bad thing.
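
Again with invented figures, just to see what the gift costs: say the trip is worth $3,000 to my family and $2,000 to yours, so the half-shares we each won are worth $1,500 and $1,000 respectively. Then:

\[
\begin{aligned}
&\text{Auction: I'd pay up to } \$1{,}500 \text{ for your half, and you'd accept anything over } \$1{,}000.\\
&\text{At } p = \$1{,}250 \text{, say: my family nets } \$3{,}000 - \$1{,}250 = \$1{,}750 \text{ and yours nets } \$1{,}250,\\
&\text{so both families beat their bare } \$1{,}500 \text{ and } \$1{,}000 \text{ shares.}\\
&\text{Gift: your family nets } \$2{,}000 \text{ and mine nets } \$0.
\end{aligned}
\]

So relative to the auction, the gift costs my family something like $1,750 while gaining yours under $1,000, and the people bearing most of that cost aren't the one doing the waiving. That's what makes this case harder than taking one for the team yourself.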

I think that a moral theory that incorporated these kinds of waivers in a big way might have some mileage in it. There are plenty of worries about it, of course. I'll talk about two.

First, how freely does someone have to be giving these permissions? People make decisions with imperfect information and imperfect rationality, and they also make them under conditions of oppression. It's a common criticism of libertarian capitalism that letting people make whatever contracts they want will lead to a lot of inequality of outcome resulting from unequal bargaining positions. Most countries don't want the economically disempowered bargaining away their kidneys, and maybe we don't want people bargaining away the fact that harming them is wrong. Some libertarian rhetoric makes it sound as though contracts actually do have this wrongness-nullifying effect; perhaps nobody quite says so, but if anyone does, I'm really not optimistic about their being right. You might be able to imagine idealized situations where the waivers look plausible, but the reality of it might look pretty hideous in some cases. And when you're doing ethics, hideousness detracts from plausibility.

My second worry is that constructing a theory of moral waivers might be joining what I think of as the excuses industry. Impartial consequentialism is notoriously demanding, especially in our interconnected world. But I don't think that should be surprising really: we don't expect it to be easy doing the right thing all the time. A long time ago I wrote about how the supposedly counterintuitive results of impartial consequentialism seemed to me to appeal to either selfishness, squeamishness, or a bit of both. I still feel the pull of that line of thought, and although I'm not really an impartial consequentialist myself, I am, as I say, a fellow-traveller. Some people try to construct theories that don't have these demanding results, but I don't really want to be in the business of constructing theories that are basically elaborate excuses that allow us to live high while other people die. I hope that's not all I'd be doing, and I don't think it's all that other opponents of impartial consequentialism are doing, but I do think it's a trap you have to be careful not to fall into.

With those worries out in the open, I'll sketch the basic outline of the theory I've got in mind. You start with a background of some kind of impartial consequentialism, and then overlay the waivers. Morality might legitimate us making some very heavy demands on each other, but we don't have to actually make these demands. I guess the way it works is these waivers will create a category of supererogatory actions - actions which are good but not obligatory - which impartial consequentialism sometimes struggles to accommodate. If someone's waived a harm it's still better not to cause the harm, but it's not obligatory. I'm imagining the theory as being most distinctive in its treatment of morality within personal relationships. I mentioned earlier that some reciprocal waiving might be a common or even constitutive feature of some relationships. Perhaps it could be extended to involve relationships between people who don't know each other as well or at all, but who are members of the same community. If I was going to think seriously about that then I'd need to learn more about communitarian ethical theories. I'm really not very familiar with how they work, but from what I've heard they sound pretty relevant.
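
Here's one crude way of drawing that picture, with notation entirely of my own invention: let \(u_i(a)\) be how well off person \(i\) is if I do \(a\), and let \(W\) be the set of people who have waived the relevant harms. Then:

\[
\begin{aligned}
a \text{ is better than } a' &\iff \textstyle\sum_i u_i(a) > \sum_i u_i(a') && \text{(goodness: the full impartial sum)}\\
a \text{ is wrong} &\iff \exists\, a' : \textstyle\sum_{i \notin W} u_i(a') > \sum_{i \notin W} u_i(a) && \text{(obligation: only unwaived welfare counts)}
\end{aligned}
\]

On this sketch, an action that maximizes the unwaived sum without maximizing the full sum is permissible but suboptimal, which is exactly the supererogation gap: it would still be better not to cause the waived harm, but causing it isn't wrong.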

The post a few days ago closed with this argument against consequentialism:

  • Hipparchia's paradox shows that fully agent-neutral consequentialism is absurd.
  • The only promising arguments for consequentialism are arguments for fully agent-neutral consequentialism.
  • So there are no good arguments for consequentialism.

It's not great, really, and I said so at the time. But let's think about how all this waivers stuff started with Hipparchia's paradox. You could just look at the paradox and say "waivers are in, impartial consequentialism is out", and then merrily start constructing theories with waivers all over the place. I think that would be a mistake. An alternative, which I don't think would be the same kind of mistake, is to look at other candidates for waivers that are somehow similar to the original case. The best case I've got in mind is when a group of people see themselves as somehow on the same side, and so individual team-members' failures aren't moral failures, even though they could have done better and the other members of the team would have been better off. The team has a sufficient unity of purpose that the members view the team as analogous to an individual. Team members don't press moral charges against each other, just as Theodorus doesn't press moral charges against himself.

One last thing about waivers is that you might share a lot of the intuitions about the examples but try to incorporate them within a straight impartial consequentialist theory. Maybe people being able to take advantage of the waivers without feeling guilty about it turns out to have the best consequences overall. There's a long tradition of moral philosophers doing this sort of thing. The main trick is to distinguish what it is for an action to be right or wrong from the information we use to decide whether an action is right or wrong. John Stuart Mill makes this move several times in Utilitarianism, and those are the sorts of levels fellow utilitarian R. M. Hare was talking about in Moral Thinking: Its Levels, Method, and Point. (At least if I remember them right.)

I used to be quite impressed with this move. Now I'm not so sure. The reason I've long been a fellow-traveller of impartial consequentialism without properly signing up is that I'm also a bit of an error theorist. I read J. L. Mackie's Ethics at an impressionable age, and while I'm a lot more sceptical about philosophical conclusions than I used to be, it's still got some pull for me. But maybe the reason it's got all this pull is that I'm thinking about moral facts in terms of what Henry Sidgwick (I think) called 'the point of view of the universe'. On that topic I read something once about the conflict between deontological and consequentialist ethics that stayed with me. The point was that deontologists shouldn't argue that sometimes there are actions we should do even though things will be worse if we do them. To concede that is to concede too much to the consequentialist: it makes things too easy for them if that's the position they have to attack. The consequentialist needs to earn the claim that there's some meaningful way in which consequences, or the whole world, can be good or bad simpliciter rather than just good or bad from the point of view of a person, or a particular system of rules, or perhaps something else. It's true: the consequentialist needs this claim and the deontologist doesn't. It's non-trivial and the deontologist should make the consequentialist earn it. And I don't think they have earned it. I can't remember where I read this, unfortunately. I'd thought it was in a Philippa Foot paper, but I re-read the two papers I thought it might be in (Foot 1972 and 1995), and while re-reading them was rewarding I couldn't find it in either. I still think it's probably her. If you can tell me where it's from, please do so in the comments. [UPDATE 24/6/18: She makes the point in Foot 1985: 'Utilitarianism and the virtues'.]

Anyway, maybe error theory wouldn't have the same pull for me if I got away from the idea of the point of view of the universe and instead thought about morality as being fundamentally about human relationships, collective decision-making and what have you. The levels move takes the manifest image of morality and explains it in terms of something more systematic at a lower level. But this systematic lower level is where the point of view of the universe is, and that's what threatens to turn me into an error theorist. The manifest image is where the human relationships and collective decision-making are, and maybe those aren't so weird. The dilemma arises because the lower level stuff about how good the universe is can seem more plausibly normative, while the higher level stuff about relationships is more plausibly real.

Of course, as things stand with the theory I've got in mind there's still an impartial consequentialist background with the waivers laid on top. The impartial consequentialist background is as weird as ever, and you can't have a moral theory that's all waivers. But maybe this could be a transitional step on the way to me having a more accurate conception of what moral facts are facts about, and perhaps eventually losing interest in moral error theory altogether. That might be nice.

Notes

[1] I'm a little unsure about the terminology here. It's pretty established to call divine command theory 'theological voluntarism', and I'm fairly sure I've seen 'voluntarism' used more generally to include non-theological versions like the waiver theory I'm talking about here. But 'voluntarism' also seems to be used to refer to theories according to which the moral properties of an action depend on the will with which the action was performed. (This idea is important in Kant's ethics.) The two ideas could overlap, but it's not obvious that they have to. So if you've got strong views about what 'voluntarism' means and think I'm using it wrong, then I apologize. And when you're discussing this blogpost with your friends, you should be careful how you use the word. But I don't know another word for the thing I'm talking about, and I think I've heard people calling it 'voluntarism', so that's the call I've made.

References

  • Foot, P. 1972: 'Morality as a system of hypothetical imperatives', Philosophical Review 81 (3):305-316
  • Foot, P. 1985: 'Utilitarianism and the virtues', Mind 94 (374):196-209
  • Foot, P. 1995: 'Does moral subjectivism rest on a mistake?', Oxford Journal of Legal Studies 15 (1):1-14
  • Hare, R. M. 1981: Moral Thinking: Its Levels, Method, and Point (Oxford University Press)
  • Mackie, J. L. 1977: Ethics: Inventing Right and Wrong (Penguin Books)
  • Mill, J. S. 1863/2004: Utilitarianism, Project Gutenberg ebook #11224, http://www.gutenberg.org/files/11224/11224-h/11224-h.htm
  • Plato c.399-5 BCE/2008: Euthyphro, trans. Benjamin Jowett, Project Gutenberg ebook #1642, http://www.gutenberg.org/files/1642/1642-h/1642-h.htm
