
Sunday, September 10, 2017

Closing The Altruism Loophole

The prisoner’s dilemma is one of the best known puzzles in game theory. Here’s a version of it.


Two criminals, Alice and Betty, have been captured and imprisoned in separate cells. The guards want them to talk. If one talks and the other doesn’t, the talker goes free and the non-talker gets a long sentence. If both talk, both get mid-length sentences. If neither talks, both get short sentences. Alice and Betty only care about the lengths of their own sentences. Should they talk?


Whatever Alice does, Betty does better if she talks. Whatever Betty does, Alice does better if she talks. So if they’re acting self-interestedly, talking is a no-brainer. But both talking works out worse for each of them than neither talking. The point is that it seems self-interest alone should be able to get them from the both-talk situation to the neither-talk situation, because it’s better for both of them. But it also seems there’s no rational way for them to make this happen. That’s the puzzle.
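
If it helps, here’s that dominance reasoning as a little Python sketch. The sentence lengths are made up, since the story above doesn’t fix any numbers; only their ordering matters.

```python
# Made-up sentence lengths in years; only the ordering matters:
# a lone talker goes free (0), the one who stayed quiet gets 10,
# both talking gets 5 each, both staying quiet gets 1 each.
SENTENCE = {
    ("talk", "talk"): (5, 5),
    ("talk", "quiet"): (0, 10),
    ("quiet", "talk"): (10, 0),
    ("quiet", "quiet"): (1, 1),
}

def alice_best_reply(betty_choice):
    """Alice's choice that minimises her own sentence, given Betty's choice."""
    return min(("talk", "quiet"),
               key=lambda alice_choice: SENTENCE[(alice_choice, betty_choice)][0])

for betty_choice in ("talk", "quiet"):
    print(f"If Betty plays {betty_choice}, Alice's best reply is to {alice_best_reply(betty_choice)}")

# Talking wins for Alice either way (and for Betty, by symmetry), yet both
# talking gives each of them 5 years instead of the 1 year from both staying quiet.
```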


One thing people sometimes suggest is that the solution is to be altruistic. The fact that if Alice talks and Betty doesn’t then Betty will get a longer sentence gives Alice a reason not to talk, if she cares about Betty. In a way, people bringing this up is annoying. It’s either a misunderstanding of the problem or a refusal to engage with the problem. Part of the set-up is that Alice and Betty only care about the lengths of their own sentences. But on the other hand, the prisoner’s dilemma is supposed to be structurally similar to some real-life situations, and in real life people do care about each other, at least a bit. Also, we sometimes like to do experiments to see how people behave in real-life prisoner’s dilemma situations. If the prisoner’s dilemma has self-interested subjects and our test subjects are somewhat altruistic, as people tend to be, then we’re testing it wrong.


There are at least three ways round the problem. One is to make Betty a less sympathetic character, who cares about something Alice doesn’t care about at all. One option I’ve heard is to make Betty a robot who only cares about increasing the number of paperclips in the world. Alice’s payoffs are money, and Betty’s payoffs are paperclips. But this introduces an asymmetry into the situation, and it also means we’re not dealing with two humans anymore; we’re dealing with a human and a robot. And the robot doesn’t behave according to general principles of rationality; it behaves how we’ve programmed it to behave. If we can’t formulate a principle, we’ll struggle to program the robot to follow it. If we tell the robot to apply the dominance reasoning, it’ll talk. If we tell the robot to assume everyone picks the same option in symmetrical situations with no indistinguishable pairs of options, it won’t talk. (This principle is very close to what Wikipedia calls superrationality.) We don’t learn anything from this. It’d be better if we could test it with people.
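
To make the contrast between the two programs concrete, here’s a rough sketch of both policies. The paperclip numbers are invented; they just mirror the shape of the dilemma (talking while the other stays quiet is best, both staying quiet beats both talking).

```python
# (robot's choice, Alice's choice) -> paperclips produced for the robot
PAPERCLIPS = {
    ("talk", "talk"): 2,
    ("talk", "quiet"): 7,
    ("quiet", "talk"): 0,
    ("quiet", "quiet"): 3,
}

def dominance_robot():
    """Talk if talking does at least as well whatever Alice does."""
    talk_dominates = all(PAPERCLIPS[("talk", a)] >= PAPERCLIPS[("quiet", a)]
                         for a in ("talk", "quiet"))
    return "talk" if talk_dominates else "quiet"

def symmetry_robot():
    """Assume both players make the same choice in a symmetrical situation,
    then pick whichever of the two symmetric outcomes is better."""
    return max(("talk", "quiet"), key=lambda c: PAPERCLIPS[(c, c)])

print(dominance_robot())  # talk
print(symmetry_robot())   # quiet
```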


A second way to try to avoid the altruism loophole is to set the payoffs so that the participants would have to be very altruistic for their altruism to affect what they do.




                     Betty talks                          Betty doesn’t talk
Alice talks          Alice gets £2, Betty gets £2         Alice gets £7, Betty gets nothing
Alice doesn’t talk   Alice gets nothing, Betty gets £7    Alice gets £3, Betty gets £3


Suppose Betty talks. By not talking Alice would give up her only £2 to get Betty an extra £5. That would be awfully nice of Alice. Supposing Betty doesn’t talk, by not talking Alice would give up £4 so Betty could get £3 instead of nothing. That seems rather nice of her too. If Alice truly loves Betty as she loves herself, she probably won’t talk however we set the payoffs, or at least she won’t know which to do because she doesn’t know what Betty will do. (Since the total payoff when nobody talks is higher than when both talk, not talking must increase the total either when the other doesn’t talk, when they do, or both.) But most people don’t love the other participant as they love themselves, and fiddling with the payoffs can make it so that more altruism is needed not to talk.
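
To put a rough number on how much altruism these payoffs demand, here’s a sketch in which Alice’s utility is her own money plus a weight w times Betty’s money. The weight w is just an illustrative device, not part of the set-up.

```python
# (Alice's choice, Betty's choice) -> (£ for Alice, £ for Betty), as in the table above.
PAYOFF = {
    ("talk", "talk"): (2, 2),
    ("talk", "quiet"): (7, 0),
    ("quiet", "talk"): (0, 7),
    ("quiet", "quiet"): (3, 3),
}

def alice_utility(alice_choice, betty_choice, w):
    """Alice's own money plus w times Betty's money; w = 0 is pure self-interest,
    w = 1 means she cares about Betty's money exactly as much as her own."""
    own, betty = PAYOFF[(alice_choice, betty_choice)]
    return own + w * betty

for w in (0.0, 0.5, 1.0, 1.5):
    best = {bc: max(("talk", "quiet"),
                    key=lambda ac: alice_utility(ac, bc, w))
            for bc in ("talk", "quiet")}
    print(f"w = {w}: best reply if Betty talks is {best['talk']}, "
          f"if she stays quiet it's {best['quiet']}")

# With these payoffs, not talking only beats talking in both columns once
# w > 4/3, i.e. once Alice values Betty's pound more than her own.
```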


A third way is cleaner. I hadn’t heard it before, so when I came up with it I thought I’d tell you about it. The problem was that Alice might allow her behaviour to be affected by concern for what happens to Betty. To avoid this, we start by roping in three other people Alice cares about just as much as Betty (let’s just assume none of the participants know anything much about the others). There are four possible outcomes for Betty, one for each combination of choices, so we randomly divide the three outcomes Betty avoids among the three other participants. Since Alice is indifferent between Betty, Bertha, Bernice and Belinda, she doesn’t care which prize goes to Betty: it’s still the same four prizes distributed among the same four people. The only variable left for Alice to care about is what happens to her. Similarly, we divide the outcomes Alice avoids among Althea, Annabel and Albertine, so Betty will be indifferent between outcomes except insofar as they affect her. Alice and Betty won’t keep quiet out of altruism now, and if they can’t think of another reason to keep quiet, they’ll end up both talking and wishing neither of them had.
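
Here’s a quick sketch of why the redistribution works, reusing the money payoffs from the table above (the particular prizes don’t matter). Whatever happens, the same four prizes go to the same four people Alice cares about equally, so the group total she cares about never changes and only her own payoff can sway her.

```python
import random

# The four prizes Betty could end up with, one for each way the game can go.
BETTY_PRIZE = {
    ("talk", "talk"): 2,
    ("talk", "quiet"): 0,   # Alice talks, Betty stays quiet
    ("quiet", "talk"): 7,
    ("quiet", "quiet"): 3,
}
B_PEOPLE = ("Betty", "Bertha", "Bernice", "Belinda")

def distribute(alice_choice, betty_choice):
    """Betty gets the prize for the outcome that actually happened; the three
    prizes she avoided are shared out at random among the other three."""
    allocation = {"Betty": BETTY_PRIZE[(alice_choice, betty_choice)]}
    leftovers = [prize for cell, prize in BETTY_PRIZE.items()
                 if cell != (alice_choice, betty_choice)]
    random.shuffle(leftovers)
    allocation.update(zip(B_PEOPLE[1:], leftovers))
    return allocation

# The total received by the four B-people is the same in every cell,
# so an Alice who cares about them all equally has nothing left to weigh
# except her own payoff.
for cell in BETTY_PRIZE:
    allocation = distribute(*cell)
    print(cell, allocation, "total:", sum(allocation.values()))
```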

So, introducing the other people closes the altruism loophole. I guess it doesn’t close the justice loophole, if there is one. The problem there is that Alice might not talk because she is concerned that Betty might not either, and she doesn’t want to punish Betty for doing her a favour. Or maybe Alice will think she has some special responsibility towards Betty as a fellow player. But at least we’ve closed the simplest version of the altruism loophole. If we haven’t tried testing the prisoner’s dilemma this way, I guess we should. Maybe we’ll get different results. Or maybe we’ll get the same results we’d previously explained through altruism, and this time we won’t be able to explain them away like that. Of course, we may already be doing this. I don’t know. It’s not my area.
