
Reasons and Persons by Derek Parfit


This is a thorough book. Parfit is not looking to overcome the is/ought dilemma here. He is not looking for an objective morality in the world. Instead, he tries to find structural issues in certain moral theories that might allow us to reject them. He also looks at how well theories meet our intuitions.

Because of the very nature of morality, there are few hard conclusions here. We may, however, decide we prefer some theories to others based on how well they achieve their aims, how well they deal with certain dilemmas, and how consistent they seem.

Despite this, Parfit seems optimistic that moral nihilism may be wrong. Sadly, an argument for or against moral scepticism was out of the scope of the book, so there is no justification here. This looks to be covered in his other book 'On What Matters':

There is another ground for doubting Moral Scepticism. We should not assume that the objectivity of Ethics must be all-or-nothing. There may be a part of morality that is objective. In describing this part, our claims may be true. When we consider this part of morality, or these moral questions, we may find the Unified Theory that would remove our disagreements. There may be other questions about which we shall never agree. There may be no true answers to these questions. Since objectivity need not be all-or-nothing, moral sceptics may be partly right. These questions may be subjective. But this need not cast doubt on the Unified Theory.

By looking at how theories handle prisoner's dilemmas, time, personal identity, future lives, imperceptible harms, etc. he concludes that M (common sense morality) and S (time-insensitive self-interest) should be rejected. Better theories might be C (consequentialism, which is both time and agent neutral) and P (present-aim theory, which is neither agent nor time-neutral).

To follow desire-fulfillment P, an agent tries to satisfy their current desires (it need not be desire-based; a hedonistic version would have them maximising their current happiness).

At the other end of the scale, a hedonistic C aims to maximise the happiness of all over all of time.

S would sit in the middle, trying to maximise the happiness of one agent over their entire life.

P does not mean that the agent doesn't care at all about the future: they may currently desire that their future self live a comfortable life. P and S do conflict in some cases, however, for example where our desires change over the course of our life. If a future desire conflicted with our present desire, S might have us sacrifice our current desires to increase our lifetime desire-fulfillment; P would not. (Hedonistic P and S conflict frequently: hedonistic P would have me eating cake often, whereas S might prefer that I eat vegetables instead, to ensure a long happy life.)

How common the cases are partly depends upon our theory about self-interest. As I claim in Appendix C, these cases are more common on the Hedonistic Theory, less common on the Success Theory. The cases may be rarest on the Unrestricted Desire-Fulfilment Theory. On this theory, the fulfilment of any of my desires counts directly as being in my interests. What is most in my interests is what would best fulfil, or enable me to fulfil, all of my desires throughout my whole life. Will this always be the same as what would best fulfil my present desires, if I knew the truth and was thinking clearly? There may be some people for whom these two would always coincide. But there are many others for whom these two often conflict. In the lives of these people, S often conflicts with P, even if S assumes the Unrestricted Desire-Fulfilment Theory. S and P conflict because these people's strongest desires are not the same throughout their lives.

The discussion of coordination problems (i.e. Moloch) was especially interesting.

There are also many interesting nuggets, such as how Christianity uses the afterlife to ensure that morality and self-interest would give the same rational decisions:

It has been assumed, for more than two millennia, that it is irrational for anyone to do what he knows will be worse for himself. Christians have assumed this since, if Christianity is true, morality and self-interest coincide. If wrongdoers know that they will go to Hell, each will know that, in acting wrongly, he is doing what will be worse for himself. Christians have been glad to appeal to the Self-interest Theory, since on their assumptions S implies that knaves are fools.

Excellent book! There is much more in here than I have covered. I am sure I will come back to it in future.


Loc: 257 We can describe all theories by saying what they tell us to try to achieve. According to all moral theories, we ought to try to act morally. According to all theories about rationality, we ought to try to act rationally. Call these our formal aims. Different moral theories, and different theories about rationality, give us different substantive aims. By ‘aim', I shall mean ‘substantive aim'.

Loc: 283 I can now re-state the central claim of S. This is (S1) For each person, there is one supremely rational ultimate aim: that his life go, for him, as well as possible.

Where S is the Self-interest Theory.

Loc: 287 If we call some theory T, call the aims that it gives us our T-given aims. Call T indirectly individually self-defeating when it is true that, if someone tries to achieve his T-given aims, these aims will be, on the whole, worse achieved.

Loc: 331 If we are not Hedonists, we need a different example. Suppose that I am driving at midnight through some desert. My car breaks down. You are a stranger, and the only other driver near. I manage to stop you, and I offer you a great reward if you rescue me. I cannot reward you now, but I promise to do so when we reach my home. Suppose next that I am transparent, unable to deceive others. I cannot lie convincingly. Either a blush, or my tone of voice, always gives me away. Suppose, finally, that I know myself to be never self-denying. If you drive me to my home, it would be worse for me if I gave you the promised reward. Since I know that I never do what will be worse for me, I know that I shall break my promise. Given my inability to lie convincingly, you know this too. You do not believe my promise, and therefore leave me stranded in the desert. This happens to me because I am never self-denying. It would have been better for me if I had been trust-worthy, disposed to keep my promises even when doing so would be worse for me. You would then have rescued me. (It may be objected that, even if I am never self-denying, I could decide to keep my promise, since making this decision would be better for me. If I decided to keep my promise, you would trust me, and would rescue me. This objection can be answered. I know that, after you have driven me home, it would be worse for me if I gave you the promised reward. If I know that I am never self-denying, I know that I shall not keep my promise. And, if I know this, I cannot decide to keep my promise. I cannot decide to do what I know that I shall not do. If I can decide to keep my promise, this must be because I believe that I shall not be never self-denying. We can add the assumption that I would not believe this unless it was true. It would then be true that it would be worse for me if I was, and would remain, never self-denying. It would be better for me if I was trustworthy.)

So a follower of S would, for example, ensure that he is not never self-denying: he would make himself trustworthy.

Loc: 452 He threatens that, unless he gets the gold in the next five minutes, he will start shooting my children, one by one. What is it rational for me to do? I need the answer fast. I realize that it would not be rational to give this man the gold. The man knows that, if he simply takes the gold, either I or my children could tell the police the make and number of the car in which he drives away. So there is a great risk that, if he gets the gold, he will kill me and my children before he drives away. Since it would be irrational to give this man the gold, should I ignore his threat? This would also be irrational. There is a great risk that he will kill one of my children, to make me believe his threat that, unless he gets the gold, he will kill my other children. What should I do? It is very likely that, whether or not I give this man the gold, he will kill us all. I am in a desperate position. Fortunately, I remember reading Schelling's The Strategy of Conflict. I also have a special drug, conveniently at hand. This drug causes one to be, for a brief period, very irrational. Before the man can stop me, I reach for the bottle and drink. Within a few seconds, it becomes apparent that I am crazy. Reeling about the room, I say to the man: ‘Go ahead. I love my children. So please kill them.' The man tries to get the gold by torturing me. I cry out: ‘This is agony. So please go on.' […] On any plausible theory about rationality, it would be rational for me, in this case, to cause myself to become for a period irrational. This answers the question that I asked above. S might tell us to cause ourselves to be disposed to act in ways that S claims to be irrational. This is no objection to S. As the case just given shows, an acceptable theory about rationality can tell us to cause ourselves to do what, in its own terms, is irrational.

Similar to throwing the steering wheel out of your car when playing chicken: you commit to ignoring threats from the opponent.
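The chicken analogy can be made concrete. A minimal sketch, with illustrative payoff numbers that are mine rather than Parfit's: removing your own option to swerve changes the opponent's best reply from "drive straight" to "swerve".

```python
# Chicken with illustrative payoffs: my payoff given (my_move, their_move).
payoffs = {
    ("swerve",   "swerve"):     0,
    ("swerve",   "straight"):  -1,   # mild humiliation
    ("straight", "swerve"):     1,   # glory
    ("straight", "straight"): -10,   # crash
}

def best_reply(my_options, their_move):
    """The move that pays best for me, given the other's fixed move.
    By symmetry this is also the opponent's best reply to my fixed move."""
    return max(my_options, key=lambda m: payoffs[(m, their_move)])

# Against a driver expected to swerve, going straight pays better...
assert best_reply(["swerve", "straight"], "swerve") == "straight"
# ...but against a driver visibly committed to "straight" (wheel thrown out),
# the only sensible reply is to swerve.
assert best_reply(["swerve", "straight"], "straight") == "swerve"
```

This is the same logic as the kidnapper case: a visible commitment to "irrational" behaviour changes what it is rational for the other party to do.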

Loc: 601 On the assumptions made above, S would tell us to change our beliefs. S would tell us to believe, not itself, but a revised form of S. On this revised theory, it is irrational for each of us to do what he believes will be worse for himself, except when he is keeping a promise. If S told us to believe this revised theory, would this be an objection to S? Would it show that it is rational to keep such promises? We must focus clearly on this question. We may be right to believe that it is rational to keep our promises, even when we know that this will be worse for us. I am asking, ‘Would this belief be supported if S itself told us to cause ourselves to have this belief?' Some people answer Yes. They argue that, if S tells us to make ourselves have this belief, this shows that this belief is justified. And they apply this argument to many other kinds of act which, like keeping promises, they believe to be morally required. If this argument succeeded, it would have great importance. It would show that, in many kinds of case, it is rational to act morally, even when we believe that this will be worse for us. Moral reasons would be shown to be stronger than the reasons provided by self-interest. Many writers have tried, unsuccessfully, to justify this conclusion. If this conclusion could be justified in the way just mentioned, this would solve what Sidgwick called ‘the profoundest problem of Ethics'


Loc: 612 There is a simple objection to this argument. The argument appeals to the fact that S would tell us to make ourselves believe that it is rational to keep our promises, even when we know that this will be worse for us. Call this belief B. B is incompatible with S, since S claims that it is irrational to keep such promises. Either S is the true theory about rationality, or it is not. If S is true, B must be false, since it is incompatible with S. If S is not true, B might be true, but S cannot support B, since a theory that is not true cannot support any conclusion. In brief: if S is true, B must be false, and if S is not true, it cannot support B. B is either false, or not supported. So, even if S tells us to try to believe B, this fact cannot support B.

Loc: 629 Suppose that we are all both transparent and never self-denying. If this was true, it would be better for me if I made myself a threat-fulfiller, and then announced to everyone else this change in my dispositions. Since I am transparent, everyone would believe my threats. And believed threats have many uses. Some of my threats could be defensive, intended to protect me from aggression by others. I might confine myself to defensive threats. But it would be tempting to use my known disposition in other ways. Suppose that the benefits of some co-operation are shared between us. And suppose that, without my co-operation, there would be no further benefits. I might say that, unless I get the largest share, I shall not co-operate. If others know me to be a threat-fulfiller, and they are never self-denying, they will give me the largest share. Failure to do so would be worse for them. Other threat-fulfillers might act in worse ways. They could reduce us to slavery. They could threaten that, unless we become their slaves, they will bring about our mutual destruction. We would know that these people would fulfil their threats. We would therefore know that we can avoid destruction only by becoming their slaves. The answer to threat-fulfillers, if we are all transparent, is to become a threat-ignorer. Such a person always ignores threats, even when he knows that doing so will be worse for him. A threat-fulfiller would not threaten a transparent threat-ignorer. He would know that, if he did, his threat would be ignored, and he would fulfil this threat, which would be worse for him. If we were all both transparent and never self-denying, what changes in our dispositions would be better for each of us? I answer this question in Appendix A, since parts of the answer are not relevant to the question I am now discussing. What is relevant is this. If we were all transparent, it would probably be better for each of us if he became a trustworthy threat-ignorer. 
These two changes would involve certain risks; but these would be heavily outweighed by the probable benefits. What would be the benefits from becoming trustworthy? That we would not be excluded from those mutually advantageous agreements that require self-denial. What would be the benefits from becoming threat-ignorers? That we would avoid becoming the slaves of threat-fulfillers.

Loc: 675 Is my act rational? It is not. As before, we might concede that, since I am acting on a belief that it was rational for me to acquire, I am not irrational. More precisely, I am rationally irrational. But what I am doing is not rational. It is irrational to ignore some threat when I know that, if I do, this will be disastrous for me and better for no one. S told me here that it was rational to make myself believe that it is rational to ignore threats, even when I know that this will be worse for me. But this does not show this belief to be correct. It does not show that, in such a case, it is rational to ignore threats. We can draw a wider conclusion. This case shows that we should reject (G2) If it is rational for someone to make himself believe that it is rational for him to act in some way, it is rational for him to act in this way.

Loc: 692 It has been argued that, by appealing to such facts, we can solve an ancient problem: we can show that, when it conflicts with self-interest, morality provides the stronger reasons for acting. This argument fails. The most that it might show is something less. In a world where we are all transparent—unable to deceive each other—it might be rational to deceive ourselves about rationality.8

Loc: 725 It may help to mention a similar distinction. The medical treatment that is objectively right is the one that would in fact be best for the patient. The treatment that is subjectively right is the one that, given the medical evidence, it would be most rational for the doctor to prescribe. As this example shows, what it would be best to know is what is objectively right. The central part of a moral theory answers this question. We need an account of subjective rightness for two reasons. We often do not know what the effects of our acts would be. And we ought to be blamed for doing what is subjectively wrong. We ought to be blamed for such acts even if they are objectively right. A doctor should be blamed for doing what was very likely to kill his patient, even if his act in fact saves this patient's life.

Loc: 743 C claims (C5) The best possible motives are those of which it is true that, if we have them, the outcome will be best. […] To apply C, we must ask what makes outcomes better or worse. The simplest answer is given by Utilitarianism. This theory combines C with the following claim: the best outcome is the one that gives to people the greatest net sum of benefits minus burdens, or, on the Hedonistic version of this claim, the greatest net sum of happiness minus misery. […] With the word ‘Consequentialism', and the letter ‘C', I shall refer to all these different theories. As with the different theories about self-interest, it would take at least a book to decide between these different versions of C. This book does not discuss this decision. I discuss only what these different versions have in common. My arguments and conclusions would apply to all, or nearly all, the plausible theories of this kind. It is worth emphasizing that, if a Consequentialist appeals to all of the principles I have mentioned, his moral theory is very different from Utilitarianism. Since such theories have seldom been discussed, this is easy to forget.

Loc: 767 Some have thought that, if Consequentialism appeals to many different principles, it ceases to be a distinctive theory, since it can be made to cover all moral theories. This is a mistake. C appeals only to principles about what makes outcomes better or worse. Thus C might claim that it would be worse if there was more deception or coercion. C would then give to all of us two common aims. We should try to cause it to be true that there is less deception or coercion. Since C gives to all agents common moral aims, I shall call C agent-neutral. Many moral theories do not take this form. These theories are agent-relative, giving to different agents different aims. It can be claimed, for example, that each of us should have the aim that he does not coerce other people. On this view, it would be wrong for me to coerce other people, even if by doing so I could cause it to be true that there would be less coercion. Similar claims might be made about deceiving or betraying others. On these claims, each person's aim should be, not that there be less deception or betrayal, but that he himself does not deceive or betray others. These claims are not Consequentialist. And these are the kinds of claim that most of us accept. C can appeal to principles about deception and betrayal, but it does not appeal to these principles in their familiar form.

i.e. deontology

Loc: 777 I shall now describe a different way in which some theory T might be self-defeating. Call T indirectly collectively self-defeating when it is true that, if several people try to achieve their T-given aims, these aims will be worse achieved. On all or most of its different versions, this may be true of C.

Loc: 784 happiness is a large part of what makes outcomes better. Most of our happiness comes from having, and acting upon, certain strong desires. These include the desires that are involved in loving certain other people, the desire to work well, and many of the strong desires on which we act when we are not working. To become pure do-gooders, we would have to act against or even to suppress most of these desires. It is likely that this would enormously reduce the sum of happiness. This would make the outcome worse, even if we always did what, of the acts that were possible for us, made the outcome best. It might not make the outcome worse than it actually is, given what people are actually like. But it would make the outcome worse than it would be if we were not pure do-gooders, but had certain other causally possible desires and dispositions.

A fence against giving everything: if you give too much, you begin to reduce your own well-being more than you improve that of others. Of course, in the current world, most people who live in developed countries would need to get very, very close to giving everything to hit this fence.

Loc: 791 One rests on the fact that, when we want to act in certain ways, we shall be likely to deceive ourselves about the effects of our acts. We shall be likely to believe, falsely, that these acts will produce the best outcome. Consider, for example, killing other people. If we want someone to be dead, it is easy to believe, falsely, that this would make the outcome better. It therefore makes the outcome better that we are strongly disposed not to kill, even when we believe that doing so would make the outcome better. Our disposition not to kill should give way only when we believe that, by killing, we would make the outcome very much better. Similar claims apply to deception, coercion, and several other kinds of act.

Hence utilitarianism with rights, as discussed by Cowen in Stubborn Attachments.

Loc: 797 in these and other ways, C is indirectly collectively self-defeating. If we were all pure do-gooders, the outcome would be worse than it would be if we had certain other sets of motives. If we know this, C tells us that it would be wrong to cause ourselves to be, or to remain, pure do-gooders. Because C makes this claim, it is not failing in its own terms. C does not condemn itself. […] S is indirectly individually self-defeating when it is true of some person that, if he was never self-denying, this would be worse for him than if he had some other set of desires and dispositions. This would be a bad effect in S's terms. And this bad effect often occurs. There are many people whose lives are going worse because they are never, or very seldom, self-denying.

Loc: 817 even if we became convinced that Consequentialism was the best moral theory, most of us would not in fact become pure do-gooders. Because he makes a similar assumption, Mackie calls Act Utilitarianism ‘the ethics of fantasy'. Like several other writers, he assumes that we should reject a moral theory if it is in this sense unrealistically demanding: if it is true that, even if we all accepted this theory, most of us would in fact seldom do what this theory claims that we ought to do. Mackie believes that a moral theory is something that we invent. If this is so, it is plausible to claim that an acceptable theory cannot be unrealistically demanding. But, on several other views about the nature of morality, this claim is not plausible. We may hope that the best theory is not unrealistically demanding. But, on these views, this can only be a hope. We cannot assume that this must be true.

Loc: 857 Even if I gave nine-tenths, some of my remaining tenth would do more good if spent by the very poor. Consequentialism thus tells me that I ought to give away almost all my income. Collective Consequentialism is much less demanding. It does not tell me to give the amount that would in fact make the outcome best. It tells me to give the amount which is such that if we all gave this amount, the outcome would be best. More exactly, it tells me to give what would be demanded by the particular International Income Tax that would make the outcome best. This tax would be progressive, requiring larger proportions from those who are richer. But the demands made on each person would be much smaller than the demands made by C, on any plausible prediction about the amounts that others will in fact give. It might be best if those as rich as me all give only half their income, or only a quarter. It might be true that, if we all gave more, this would so disrupt our own economies that in the future we would have much less to give. And it might be true that, if we all gave more, our gift would be too large to be absorbed by the economies of the poorer countries.

Why on earth would you go for this theory, beyond rationalising why you don't need to actually give away all your stuff?

Loc: 867 C might require that a few people give away almost all their money, and try to make themselves pure do-gooders. But this would only be because most other people are not doing what C claims that they ought to do. They are not giving to the poor the amounts that they ought to give. In its partial compliance theory, C has been claimed to be excessively demanding. This is not the claim that C is unrealistically demanding. As I have said, I believe that this would be no objection. What is claimed is that, in its partial compliance theory, C makes unfair or unreasonable demands. This objection may not apply to C's full compliance theory. C would be much less demanding if we all had one of the possible sets of motives that, according to C, we ought to try to cause ourselves to have.

Loc: 924 We could imagine other motives that would have made the outcome even better. But, given the facts about human nature, such motives are not causally possible. Since Clare loves her child, she saves him rather than several strangers. We could imagine that our love for our children would ‘switch off' whenever other people's lives are at stake. It might be true that, if we all had this kind of love, this would make the outcome better. If we all gave such priority to saving more lives, there would be few cases in which our love for our children would have to switch off. This love could therefore be much as it is now. But it is in fact impossible that our love could be like this. We could not bring about such ‘fine-tuning'. When there is a threat to our children's lives, our love could not switch off merely because several strangers are also threatened.

This also kills the concept of unconditional love, which might reduce attachment/happiness. Could we add amnesia to get around this?

Loc: 1,050 These four claims assume that rationality and rightness can be inherited, or transferred. If it is rational or right for me either to cause myself to be disposed to act in some way, or to make myself believe that this act is rational or right, this act is rational or right. My examples show that this is not so. Rationality and rightness cannot be inherited in this way. In this respect the truth is simpler than these claims imply.

Loc: 1,097 These claims should affect our answer to the question whether it would make the outcome better if we all ceased to believe C. We might believe correctly that there is some other moral theory belief in which would, in the short run, make the outcome better. But once Consequentialism has effaced itself, and the cord is cut, the long-term consequences might be much worse. This suggests that the most that could be true is that C is partly self-effacing. It might be better if most people caused themselves to believe some other theory, by some process of self-deception that, to succeed, must also be forgotten. But, as a precaution, a few people should continue to believe C, and should keep convincing evidence about this self-deception. These people need not live in Government House, or have any other special status. If things went well, the few would do nothing. But if the moral theory believed by most did become disastrous, the few could then produce their evidence. When most people learnt that their moral beliefs were the result of self-deception, this would undermine these beliefs, and prevent the disaster.

Sounds like a good book


Loc: 1,344 The Self-interest Theory gives to different agents different aims. Could this theory be directly individually self-defeating? The aim that S gives to me is that my life goes, for me, as well as possible. I successfully follow S when I do what, of the acts that are possible for me, will be best for me. Could it be certain that, if I successfully follow S, I will thereby make the outcome worse for me? This is not possible.

Loc: 1,353 Can theories be directly collectively self-defeating? Suppose that Theory T gives to you and me different aims. And suppose that each could either (1) promote his own T-given aim or (2) more effectively promote the other's. The outcomes are shown below. Such cases have great practical importance. The simplest cases may occur when (a) Theory T is agent-relative, giving to different agents different aims, (b) the achievement of each person's T-given aims partly depends on what others do, and (c) what each does will not affect what these others do.


These three conditions often hold if T is the Self-interest Theory. S is often directly collectively self-defeating. These cases have a misleading name taken from one example. This is the Prisoner's Dilemma.
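The two-person case can be written out directly. A minimal sketch with illustrative payoffs (the numbers are mine, not from the book): whatever the other does, each does better by defecting, yet if both follow S and defect, each does worse than if both had cooperated.

```python
# Two-person Prisoner's Dilemma. payoffs[(my_move, their_move)] = my payoff,
# with illustrative numbers chosen so that defecting dominates.
payoffs = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def best_reply(their_move):
    """The move that is better for me, given the other's fixed move."""
    return max(["cooperate", "defect"], key=lambda m: payoffs[(m, their_move)])

# Defection is the best reply to either move: S tells each of us to defect...
assert best_reply("cooperate") == "defect"
assert best_reply("defect") == "defect"

# ...yet mutual defection is worse for each than mutual cooperation.
# This is S being directly collectively self-defeating.
assert payoffs[("defect", "defect")] < payoffs[("cooperate", "cooperate")]
```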

Loc: 1,404 Though we can seldom know that we face a Two-Person Prisoner's Dilemma, we can very often know that we face Many-Person Versions. And these have great practical importance. The rare Two-Person Case is important only as a model for the Many-Person Versions. We face a Many-Person Dilemma when it is certain that, if each rather than none of us does what will be better for himself, this will be worse for everyone.


Loc: 1,409 One Many-Person Case is the Samaritan's Dilemma. Each of us could sometimes help a stranger at some lesser cost to himself. Each could about as often be similarly helped. In small communities, the cost of helping might be indirectly met. If I help, this may cause me to be later helped in return. But in large communities this is unlikely. It may here be better for each if he never helps. But it will be worse for each if no one ever helps. Each might gain from never helping, but he would lose, and lose more, from never being helped.

Free riders

Loc: 1,433 Many-Person Dilemmas are, I have said, extremely common. One reason is this. In a Two-Person Case, it is unlikely that the Negative Condition holds. This may need to be specially ensured, by prison-officers, or game-theorists. But in cases that involve very many people, the Negative Condition naturally holds. It need not be true that each must act before learning what the others do. Even when this is not true, if we are very numerous, what each does would be most unlikely to affect what most others do.


Loc: 1,442 Many Contributor's Dilemmas involve two thresholds. In these cases, there are two numbers v and w such that, if fewer than v contribute, no benefit will be produced, and if more than w contribute, this will not increase the benefit produced. In many of these cases we do not know what others are likely to do. It will then not be certain that, if anyone contributes, he will benefit others. It will be true only that he will give to others an expected benefit. One extreme case is that of voting, where the gap between the two thresholds may be the gap of a single vote. The number w is here v + 1.
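The two-threshold structure can be sketched directly. The benefit function and all numbers below are illustrative assumptions, not Parfit's: below v contributors nothing is produced, between v and w each contribution adds to the benefit, and beyond w extra contributions add nothing.

```python
def benefit(n, v=5, w=8, unit=10.0):
    """Total benefit when n people contribute (illustrative numbers).
    Below the lower threshold v nothing is produced; above the upper
    threshold w, further contributions do not increase the benefit."""
    if n < v:
        return 0.0
    return min(n, w) * unit

# A single contribution only matters between the thresholds:
assert benefit(4) == 0.0                 # below v: nothing produced
assert benefit(5) - benefit(4) == 50.0   # the v-th contributor is pivotal
assert benefit(9) - benefit(8) == 0.0    # beyond w: contributing changes nothing

# When we don't know what others will do, a contribution gives only an
# *expected* benefit: the chance of being pivotal, times what it adds.
def expected_marginal_benefit(prob_others):
    """prob_others[k] = probability that exactly k *other* people contribute."""
    return sum(p * (benefit(k + 1) - benefit(k)) for k, p in prob_others.items())

# E.g. a 50% chance that 3 others contribute (I'd be wasted) and a 50%
# chance that 4 others do (I'd be pivotal) gives an expected benefit of 25.
assert expected_marginal_benefit({3: 0.5, 4: 0.5}) == 25.0
```

Voting is the extreme case the quote mentions: the gap between the two thresholds is a single contribution, so a vote either swings the result or changes nothing.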

This can be solved with things like Kickstarter, where we promise money on the condition that enough others also contribute. David Friedman discusses similar contracts, and their limitations in some cases, in The Machinery of Freedom.

Loc: 1,447 Others need co-operative efforts. When in large industries wages depend on profits, and work is unpleasant or a burden, it can be better for each if others work harder, worse for each if he himself does. The same can be true for peasants on collective farms. A third kind of public good is the avoidance of an evil. The contribution needed here is often self-restraint. Such cases may involve Commuters: Each goes faster if he drives, but if all drive each goes slower than if all take buses; Soldiers: Each will be safer if he turns and runs, but if all do more will be killed than if none do; Fishermen: When the sea is overfished, it can be better for each if he tries to catch more, worse for each if all do; Peasants: When the land is overcrowded, it can be better for each if he or she has more children, worse for each if all do. […] There are countless other cases. It can be better for each if he adds to pollution, uses more energy, jumps queues, and breaks agreements; but, if all do these things, that can be worse for each than if none do. It is very often true that, if each rather than none does what will be better for himself, this will be worse for everyone.

Moloch: a very interesting problem that government is meant to solve

Loc: 1,461 Suppose that each is disposed to do what will be better for himself, or his family, or those he loves. There is then a practical problem. Unless something changes, the actual outcome will be worse for everyone. This problem is one of the chief reasons why we need more than laissez-faire economics—why we need both politics and morality.

Surely, if I see that this is the case, I should look for methods to ensure that everyone does the thing, for example by proposing a contract under which all of us must do it. (People often criticise this sort of idea as 'reinventing government')

Loc: 1,464 And let us take as understood the words ‘or his family, or those he loves'. Each has two alternatives: E (more egoistic), A (more altruistic). If all do E that will be worse for each than if all do A. But, whatever others do, it will be better for each if he does E. The problem is that, for this reason, each is now disposed to do E. This problem will be partly solved if most do A, wholly solved if all do. […] In solution (1), the self-benefiting choice is made impossible. This is sometimes best. In many Contributor's Dilemmas, there should be inescapable taxation. But (1) would often be a poor solution. Fishing nets could be destroyed, soldiers chained to their posts. Both have disadvantages.

Solution (1) makes E impossible

(2) is a less direct solution. E remains possible, but A is made better for each. There might be a system of rewards. But, if this works, all must be rewarded. It may be better if the sole reward is to avoid some penalty. If this works, no one pays. If all deserters would be shot, there may be no deserters.

Solution (2) makes A better than E
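The E/A structure is the standard dominance pattern of a Prisoner's Dilemma, and can be checked mechanically. A minimal two-person sketch, with payoff numbers invented purely for illustration:

```python
# Hypothetical payoffs for a two-person version: each chooses
# E (more egoistic) or A (more altruistic).
PAYOFF = {  # (my_choice, other_choice) -> my payoff
    ("E", "E"): 1, ("E", "A"): 3,
    ("A", "E"): 0, ("A", "A"): 2,
}

def dominant_choice(other):
    """Whatever the other does, which choice is better for me?"""
    return max("EA", key=lambda me: PAYOFF[(me, other)])

# E dominates: it is better for me whatever you do...
assert dominant_choice("E") == "E" and dominant_choice("A") == "E"
# ...yet if both do E, each gets less than if both do A.
assert PAYOFF[("E", "E")] < PAYOFF[("A", "A")]
```

Any payoffs with this dominance-plus-worse-for-both shape produce the same problem; the specific numbers do not matter.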

Loc: 1,482 An alternative is (1), where it is made impossible for people to have more than two children. This would involve compulsory sterilization after the birth of one's second child. It would be better if this sterilization could be reversed if either or both of one's children died. Such a solution may seem horrendous. But it might receive unanimous support in a referendum. It would be better for all the people in some group if none rather than all have more than two children. If all prefer that this be what happens, all may prefer and vote for such a system of compulsory sterilization. If it was unanimously supported in a referendum, this might remove what is horrendous in the compulsion. And this solution has advantages over a system of rewards or penalties. As I have said, when such a system is not wholly effective, those with more children must pay penalties, as must their children. It may be better if what would be penalized is, instead, made impossible.

Loc: 1,489 (1) and (2) are political solutions. What is changed is our situation. (3) to (5) are psychological. It is we who change. This change may be specific, solving only one Dilemma. The fishermen might grow lazy, the soldiers might come to prefer death to dishonour, or be drilled into automatic obedience. Here are four changes of a more general kind: We might become trustworthy. Each might then promise to do A on condition that the others make the same promise. We might become reluctant to be ‘free-riders'. If each believes that many others will do A, he may then prefer to do his share. We might become Kantians. Each would then do only what he could rationally will everyone to do. None could rationally will that all do E. Each would therefore do A. We might become more altruistic. Given sufficient altruism, each would do A. These are moral solutions. Because they might solve any Dilemma, they are the most important psychological solutions.

Loc: 1,501 It is not enough to know which solution would be best. Any solution must be achieved, or brought about. This is often easier with the political solutions. Situations can be changed more easily than people. But we often face another, second-order, Contributor's Dilemma. Few political solutions can be achieved by a single person. Most require co-operation by many people. But a solution is a public good, benefiting each whether or not he does his share in bringing it about. In most large groups, it will be worse for each if he does his share. The difference that he makes will be too small to repay his contribution. This problem may be small in well-organized democracies. It may be sufficient here to get the original Dilemma widely understood.

Loc: 1,507 The problem is greater when there is no government. This is what worried Hobbes. It should now worry nations. One example is the spread of nuclear weapons. Without world-government, it may be hard to achieve a solution.

Loc: 1,521 If we are numerous, unanimity will in practice be hard to obtain. If our only moral motive is trustworthiness, we shall then be unlikely to achieve the joint conditional agreement. It would be likely to be worse for each if he joined. (We shall also be unlikely even to communicate.)

Ideally we could exclude people who do not join the agreement, though this is not always possible

Loc: 1,531 The fourth moral solution is sufficient altruism. I am not referring here to pure altruism. Pure altruists, who give no weight to their own interests, may face analogues of the Prisoner's Dilemma. It can be true that, if all rather than none do what is certain to be better for others, this will be worse for everyone. By ‘sufficient altruism' I mean sufficient concern for others, where the limiting case is impartial benevolence: an equal concern for everyone, including oneself.


Loc: 1,556 On the Share-of-the-Total View, each produces his share of the total benefit. Since we five save a hundred lives, each saves twenty lives. Less literally, the good that each does is equivalent to the saving of this many lives. […] (C6) An act benefits someone if its consequence is that someone is benefited more. An act harms someone if its consequence is that someone is harmed more. The act that benefits people most is the act whose consequence is that people are benefited most. […] The First Mistake in moral mathematics is the Share-of-the-Total View. We should reject this view, and appeal instead to (C6).
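A toy version of why the Share-of-the-Total View and (C6) can disagree (the specific numbers are my own illustration, not Parfit's):

```python
# Four rescuers can save 100 lives without me; I could instead save
# 10 lives elsewhere, where no one else will act.
share_view_credit = 100 / 5          # join the four: credited with "20 lives"
marginal_if_i_join = 0               # the 100 are saved with or without me
marginal_if_i_go_elsewhere = 10

# Share-of-the-Total tells me to join (20 > 10); (C6) compares what
# actually changes because of my act, and sends me elsewhere.
assert share_view_credit > marginal_if_i_go_elsewhere
assert marginal_if_i_go_elsewhere > marginal_if_i_join
```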

Loc: 1,603 (The Second Mistake) If some act is right or wrong because of its effects, the only relevant effects are the effects of this particular act. This assumption is mistaken in at least two kinds of case. In some cases, effects are overdetermined. Consider Case One. X and Y simultaneously shoot and kill me. Either shot, by itself, would have killed.

Loc: 1,611 (C7) Even if an act harms no one, this act may be wrong because it is one of a set of acts that together harm other people. Similarly, even if some act benefits no one, it can be what someone ought to do, because it is one of a set of acts that together benefit other people.

Loc: 1,636 This objection shows the need for another claim. In Case Three it is true that, if both X and Y had acted differently, I would not have been harmed. But this does not show that X and Y together harm me. It is also true that, if X, Y, and Fred Astaire had all acted differently, I would not have been harmed. But this does not make Fred Astaire a member of a group who together harm me. We should claim (C8) When some group together harm or benefit other people, this group is the smallest group of whom it is true that, if they had all acted differently, the other people would not have been harmed, or benefited.

In the example, if X had not poisoned me in the first place, then Y's action would not have killed me. Fred is not involved either; only X is morally responsible.

Loc: 1,697 We can usually ignore a very small chance. But we should not do so when we may affect a very large number of people, or when the chance will be taken a very large number of times.

Loc: 1,699 A similar point applies if an act is likely or certain to give to others very small benefits. We should not ignore such benefits when they would go to a very large number of people. This large number roughly cancels out the smallness of the benefits. The total sum of benefits may thus be large.

Shut up and multiply
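"Shut up and multiply" in one line (hypothetical numbers): a benefit imperceptible to each recipient can still be large in total.

```python
per_person_seconds = 0.01        # far too small for any individual to notice
recipients = 10_000_000
total_hours = per_person_seconds * recipients / 3600

# Roughly 28 person-hours of benefit, from individually imperceptible gains.
assert total_hours > 24
```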

Loc: 1,874 The Fifth Mistake in moral mathematics is the belief that imperceptible effects cannot be morally significant. This is a very serious mistake. When all the Harmless Torturers act, each is acting very wrongly. This is true even though each makes no one perceptibly worse off. The same could be true of us. We should cease to think that an act cannot be wrong, because of its effects on other people, if this act makes no one perceptibly worse off. Each of our acts may be very wrong, because of its effects on other people, even if none of these people could ever notice any of these effects. Our acts may together make these people very much worse off.

Pollution etc.

Loc: 1,895 Our alternative is to appeal to what these fishermen together do. Each fisherman knows that, if he and all the others do not restrict their catches, they will together impose upon themselves a great total loss. Rational altruists would believe these acts to be wrong. They would avoid this disaster. It may be said: ‘So would rational egoists. Each knows that, if he does not restrict his catch, he is a member of a group who impose upon themselves a great loss. It is irrational to act in this way, even in self-interested terms.' As I shall argue in the next chapter, this claim is not justified. Each knows that, if he does not restrict his catch, this will be better for himself. This is so whatever others do. When someone does what he knows will be better for himself, it cannot be claimed that his act is irrational in self-interested terms.

Yes: a classic libertarian dilemma

Loc: 1,901 The Commuter's Dilemma. Suppose that we live in the suburbs of a large city. We can get to and return from work either by car or by bus. Since there are no bus-lanes, extra traffic slows buses just as much as it slows cars. We could therefore know the following to be true. When most of us are going by car, if any one of us goes by car rather than by bus, he will thereby save himself some time, but he will impose on others a much greater total loss of time.


Loc: 1,908 Rational altruists would avoid this result. As before, they could appeal either to the effects of what each person does, or to the effects of what all together do. Each saves himself some time, at the cost of imposing on others a much greater total loss of time. We could claim that it is wrong to act in this way, even though the effects on each of the others would be trivial. We could instead claim that this act is wrong, because those who act in this way together impose on everyone a great loss of time. If we accept either of these claims, and have sufficient altruism, we would solve the Commuter's Dilemma, saving ourselves much time every day.

Do toll roads help? They impose a cost that could be redistributed to bus users to offset that harm.
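The commuter's asymmetry is simple arithmetic (figures invented for illustration): the time I save by driving is smaller than the total delay I impose on everyone else.

```python
my_saving_minutes = 10.0      # time I save by taking the car
delay_per_commuter = 0.02     # extra delay my car adds to each other commuter
other_commuters = 1000

total_imposed = delay_per_commuter * other_commuters   # 20 minutes in total
assert total_imposed > my_saving_minutes
```

A charge on driving set above the value of `my_saving_minutes` would make the car the worse option for each driver, which is the intuition behind pricing road use.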

Loc: 1,919 Common-Sense Morality works best in small communities. When there are few of us, if we give to or impose on others great total benefits or harms, we must be affecting other people in significant ways, that would be grounds either for gratitude, or resentment. In small communities, it is a plausible claim that we cannot have harmed others if there is no one with an obvious complaint, or ground for resenting what we have done.

Yes, so keep communities small!


Loc: 1,946 Some say that no one does what he believes will be worse for him. This has been often refuted. Others say that what each does is, by definition, best for him. In the economist's phrase, it will ‘maximize his utility'. Since this is merely a definition, it cannot be false. But it is here irrelevant. It is simply not about what is in a person's own long-term self-interest. Others say that virtue is always rewarded. Unless there is an after-life, this has also been refuted. Others say that virtue is its own reward. On the Objective List Theory, being moral and acting morally may be one of the things that make our lives go better. But, on the plausible versions of this theory, there could be cases where acting morally would be, on the whole, worse for someone. Acting morally might deprive this person of too many of the other things that make our lives go better.

(on self-interest)

Loc: 1,958 The problem is this. We may have moral reasons to make the altruistic choice. But it will be better for each if he makes the self-benefiting choice. Morality conflicts with self-interest. When these conflict, what is it rational to do? On the Self-interest Theory, it is the self-benefiting choice which is rational. If we believe S, we shall be ambivalent about self-denying moral solutions. We shall believe that, to achieve such solutions, we must all act irrationally. Many writers resist this conclusion. Some claim that moral reasons are not weaker than self-interested reasons. Others claim, more boldly, that they are stronger. On their view, it is the self-benefiting choice which is irrational. This debate may seem unresolvable. How can these two kinds of reason be weighed against each other? Moral reasons are, of course, morally supreme. But self-interested reasons are, in self-interested terms, supreme. Where can we find a neutral scale?

Surely both are arbitrary, so we cannot? Neither exists in the world, so how could we measure them?

Loc: 1,975 We could answer: ‘No. The pursuit by each of self-interest is better for him. It succeeds. Why is S collectively self-defeating? Only because the pursuit of self-interest is worse for others. This does not make it unsuccessful. It is not benevolence.' If we are self-interested, we shall of course deplore Prisoner's Dilemmas. These are not the cases loved by classical economists, where each gains if everyone pursues self-interest. We might say: ‘In those cases, S both works and approves the situation. In Prisoner's Dilemmas, S still works. Each still gains from his own pursuit of self-interest. But since each loses even more from the self-interested acts of others, S here condemns the situation.'

Loc: 2,024 Does this show that S is failing in its own terms? It may seem so. And it is tempting to contrast S with morality. We might say, ‘The Self-interest Theory breeds conflict, telling each to work against others. This is how, if everyone pursues self-interest, this can be bad for everyone. Where the Self-interest Theory divides, morality unites. It tells us to work together—to do the best we can. Even on the scale provided by self-interest, morality therefore wins. This is what we learn from Prisoner's Dilemmas. If we cease to be self-interested and become moral, we do better even in self-interested terms.' This argument fails. We do better, but each does worse. If we both do A rather than E, we make the outcome better for each, but each makes the outcome worse for himself. Whatever the other does, it would be better for each if he did E. In Prisoner's Dilemmas, the problem is this. Should each do the best he can for himself? Or should we do the best we can for each? If each does what is best for himself, we do worse than we could for each. But we do better for each only if each does worse than he could for himself.

Loc: 2,033 There might be cases where, if each does better in this theory's terms, we do worse, and vice versa. Call such cases Each-We Dilemmas. A theory can produce such Dilemmas even if it is not concerned with what is in our interests. Consequentialist theories cannot produce such Dilemmas. As we saw in Section 21, this is because these theories are agent-neutral, giving to all agents common aims.

Loc: 2,045 At the collective level—as an answer to the question, ‘How should we all act?'—the Self-interest Theory would condemn itself. Suppose that we are choosing what code of conduct will be publicly encouraged, and taught in schools. S would here tell us to vote against itself. If we are choosing a collective code, the self-interested choice would be some version of morality.

! See my thoughts on moral nihilism and possible utilitarian conspiracy

Loc: 2,048 S is universal, applying to everyone. But S is not a collective code. It is a theory about individual rationality. This answers the smaller question that I asked above. In Prisoner's Dilemmas, S is individually successful. Since it is only collectively self-defeating, S does not fail in its own terms. S does not condemn itself.

Loc: 2,052 It may help to introduce another common theory. This tells each to do what will best achieve his present aims. Call this the Present-aim Theory, or P.

Loc: 2,063 Here is a trivial but, in my case, true example. At each time I will best achieve my present aims if I waste no energy on being tidy. But if I am never tidy this may cause me at each later time to achieve less. And my untidiness may frustrate what I tried to achieve at the first time. It will then be true, as it sadly is, that being never tidy causes me at each time to achieve less.

Loc: 2,077 As these claims suggest, Each-We Dilemmas are a special case of an even wider problem. Call these Reason-Relativity Dilemmas. S produces Each-We Dilemmas because its reasons are agent-relative. According to S, I can have a reason to do what you can have a reason to undo. P produces Intertemporal Dilemmas because its reasons are time-relative. According to P, I can have a reason now to do what I shall later have a reason to undo.

Loc: 2,105 Many moral theorists make a second claim. They believe that certain reasons are not agent-relative. They might say: ‘The force of a reason may extend, not only over time, but over different lives. Thus, if I have a reason to relieve my pain, this is a reason for you too. You have a reason to relieve my pain.'56 The Self-interest Theorist makes the first claim, but rejects the second. He may find it hard to defend both halves of this position. In reply to the moralist, he may ask, ‘Why should I give weight to aims which are not mine?' But a Present-aim Theorist can ask, ‘Why should I give weight now to aims which are not mine now?'

Loc: 2,298 The second point is that this can matter in an agent-relative way. It will help to remember the Self-interest Theory. In Prisoner's Dilemmas, this theory is directly self-defeating. If all rather than none successfully follow S, we will thereby cause the S-given aims of each to be worse achieved. We will make the outcome worse for everyone. If we believe S, will we think that this matters? Or does it only matter whether each achieves his formal aim: the avoidance of irrationality? The answer is clear. S gives to each the substantive aim that the outcome be, for him, as good as possible. The achievement of this aim matters. And it matters in an agent-relative way. If we believe S, we shall believe that it matters that, in Prisoner's Dilemmas, if we all follow S, this will be worse for each of us. Though they do not refute S, these cases are, in self-interested terms, regrettable. In claiming this, we need not appeal to S's agent-neutral form: Utilitarianism. The Self-interest Theory is about rationality rather than morality. But the comparison shows that, in discussing Common-Sense Morality, we need not beg the question. If it matters whether our M-given aims are achieved, this can matter in an agent-relative way.

Loc: 2,323 those who believe S may claim that this is irrelevant. They can say: The Self-interest Theory does not claim to be a collective code. It is a theory of individual rationality. To be collectively self-defeating is, in the case of S, not to be damagingly self-defeating.' Can we defend Common-Sense Morality in this way? This depends upon our view about the nature of morality, and moral reasoning. On most views, the answer would be No. On these views, morality is essentially a collective code—an answer to the question ‘How should we all act?' An acceptable answer to this question must be successful at the collective level. The answer cannot be directly collectively self-defeating. If we believe in Common-Sense Morality, we should therefore revise this theory so that it would not be in this way self-defeating. We should adopt R.

Loc: 2,329 Kant's view about the nature of moral reasoning. Assume that I am facing one of my Parent's Dilemmas. Could I rationally will that all give priority to their own children, when this would be worse for everyone's children, including my own? The answer is No. For Kantians, the essence of morality is the move from each to we. Each should do only what he can rationally will that we all do. A Kantian morality cannot be directly collectively self-defeating.


Loc: 2,334 Other writers hold Constructivist views about the nature of morality. A morality is, for them, something that a society creates, or what it would be rational for a society's members to agree to be what governs their behaviour. This is another kind of view on which an acceptable moral theory cannot be directly collectively self-defeating.

Loc: 2,405 In Prisoner's Dilemmas, the Self-interest Theory is directly collectively self-defeating. In these cases, if we all pursue self-interest, this will be worse for all of us. It would be better for all of us if, instead, we all acted morally. Some writers argue that, because this is true, morality is superior to the Self-interest Theory, even in self-interested terms. As I showed in Chapter 4, this argument fails. In these cases S succeeds at the individual level. Since S is a theory of individual rationality, it need not be successful at the collective level. When this argument is advanced by believers in Common-Sense Morality, it back-fires. It does not refute S, but it does refute part of their own theory. Like S, Common-Sense Morality is often directly collectively self-defeating. Unlike S, a moral theory must be collectively successful. These M-believers must therefore revise their beliefs, moving from M to R.

Loc: 2,431 Because C is an agent-neutral theory, it is indirectly self-defeating, and it therefore needs to include Ideal and Practical Motive Theories that, in the sense defined above, roughly correspond to M. Because M is an agent-relative theory, it is often directly self-defeating, and it therefore needs to be revised so that its Ideal and Practical Act Theories are in part Consequentialist. C and M face objections that can be met only by enlarging and revising these theories, in ways that bring them closer together. These facts naturally suggest an attractive possibility. The arguments in Chapters 1 and 4 support conclusions that may dovetail, or join together to make a larger whole. We might be able to develop a theory that includes and combines revised versions of both C and M. Call this the Unified Theory. […] The arguments in Chapters 1 and 4 both point towards a Unified Theory. But developing this theory, in a convincing way, would take at least a book. That book is not this book. I shall merely add some brief remarks. […] Since the Unified Theory would include a version of C, it may be objected that it would be too demanding. But this objection may also be partly met by C's Reaction Theory. Return to the question of how much those in richer nations should give to the poor. Since others will in fact give little, C claims that each of the rich ought to give almost all his income. If the rich give less, they are acting wrongly. But if each of the rich was blamed for failing to give nearly all his income, blame would cease to be effective. The best pattern of blame and remorse is the pattern that would cause the rich to give most. Since this is so, C might imply that the rich should be blamed, and should feel remorse, only when they fail to give a much smaller part of their incomes, such as one tenth.

This seems a bit of a stretch

Loc: 2,478 Many people are moral sceptics: believing that no moral theory can be true, or be the best theory. It may be hard to resist scepticism if we continue to have deep disagreements. One of our deepest disagreements is between Consequentialists and those who believe in Common-Sense Morality. The arguments in Chapters 1 and 4 reduce this disagreement. If we can develop the Unified Theory, this disagreement might be removed. We might find that, in Mill's words, our opponents were ‘climbing the hill on the other side'. Because our moral beliefs no longer disagreed, we might also change our view about the status of these beliefs. Moral scepticism might be undermined.

The issue of the basis for C or U remains. They are unfalsifiable; they do not exist in the world. There is no 'true' theory. In the book Parfit is looking for the 'best' theory, i.e. the most internally consistent theory that also matches many people's intuitions. This does not make it 'true', only close to this arbitrary goal.


Loc: 2,533 The Instrumental and Deliberative versions, which are widely believed, make two claims: (1) What each person has most reason to do is what would best fulfil the desires that, at the time of acting, he either has or would have if he knew the facts and was thinking clearly. (2) Desires cannot be intrinsically irrational, or rationally required. These are quite different claims. We could reject (2) and accept a qualified version of (1). We would then be accepting the Critical version of P.

Loc: 2,551 I shall discuss only cases where the Deliberative and Instrumental Theories coincide. These are cases where some person knows all of the relevant facts, and is thinking clearly. I shall also assume that what would best fulfil this person's present desires is the same as what this person most wants, all things considered. And I shall often assume that this person's desires do not conflict either with his moral beliefs, or with his other values and ideals. By making these assumptions I avoid considering several important questions. These questions must be answered by any complete theory about rationality. But they are not relevant to my main aim in Part Two of this book. This aim is to show that we should reject the Self-interest Theory.

Loc: 2,596 CP's other distinctive claim is that some desires, or sets of desires, are intrinsically irrational. I wrote above that, in most cases, my reason for acting is not one of my desires, but the respect in which what I desire is worth desiring. This naturally suggests how some desires might be intrinsically irrational. We can claim: ‘It is irrational to desire something that is in no respect worth desiring. It is even more irrational to desire something that is worth not desiring—worth avoiding.'

Not worth desiring based on what? Compare deliberative P, where you should pursue the aims you would have if you knew the relevant facts and were thinking clearly, with instrumental P, where you should simply pursue your current aims.

Loc: 2,688 An argument for P may force S to retreat to weaker claims. The gravity of threats to S thus depends on two things: how strong the arguments are, and how far, if they succeed, they will force S to retreat. The most ambitious threat would be an argument that showed that, whenever S conflicts with P, we have no reason to follow S. We have no reason to act in our own interests if this would frustrate what, at the time of acting, knowing the facts and thinking clearly, we most want or value. This would be, for S, complete defeat.

Loc: 2,694 the Self-interest Theory lies between morality and the Present-aim Theory. It therefore faces a classic danger: war on two fronts. While it might survive attack from only one direction, it may be unable to survive a double attack. I believe that this is so. Many writers argue that morality provides the best or strongest reasons for acting. In rejecting these arguments, a Self-interest Theorist makes assumptions which can be turned against him by a Present-aim Theorist. And his replies to the Present-aim Theorist, if they are valid, undermine his rejection of morality. Let us say that, in our view, a theory survives if we believe that it is rational to act upon it. A theory wins if it is the sole survivor. We shall then believe that it is irrational not to act upon this theory. If a theory does not win, having to acknowledge undefeated rivals, it must qualify its claims.

Loc: 2,709 One mistake is to assume that the Self-interest and Present-aim Theories always coincide. No one assumes this in the case of the Instrumental version of P. What people actually want is too often grossly against their interests. But it is widely assumed that the Deliberative version of P coincides with S. It is widely assumed that what each person would most want, if he really knew the facts and was thinking clearly, would be to do whatever would be best for him, or would best promote his own long-term self-interest. This assumption is called Psychological Egoism. […] Most of us, most of the time, strongly want to act in our own interests. But there are many cases where this is not someone's strongest desire, or where, even if it is, it is outweighed by several other desires. There are many cases where this is true even of someone who knows the relevant facts and is thinking clearly. This is so, for instance, when the Present-aim Theory supports morality in a conflict with the Self-interest Theory. What someone most wants may be to do his duty, even though he knows that this will be against his interests. (Remember that, for simplicity, we are considering cases where what someone most wants, all things considered, is the same as what would best fulfil his present desires.)


Loc: 2,731 How common the cases are partly depends upon our theory about self-interest. As I claim in Appendix C, these cases are more common on the Hedonistic Theory, less common on the Success Theory. The cases may be rarest on the Unrestricted Desire-Fulfilment Theory. On this theory, the fulfilment of any of my desires counts directly as being in my interests. What is most in my interests is what would best fulfil, or enable me to fulfil, all of my desires throughout my whole life. Will this always be the same as what would best fulfil my present desires, if I knew the truth and was thinking clearly? There may be some people for whom these two would always coincide. But there are many others for whom these two often conflict. In the lives of these people, S often conflicts with P, even if S assumes the Unrestricted Desire-Fulfilment Theory. S and P conflict because these people's strongest desires are not the same throughout their lives.

Short term, I have a strong desire to eat the cake. Following S, I refrain, since it might affect my health ten years from now. Following P, I may still desire a long and healthy life, so the decision may be the same in this case. A slightly different case: a steak, where I believe I have a 30% chance of becoming vegetarian. Following S, I might decide not to eat meat, since the guilt I would feel in the future would far outweigh the pleasure now. According to P, however, I have no desire now to avoid meat, and would not consider any future desires, so I would eat the steak. Would we then trade small pleasure now for great suffering later? Following P, we might still care about our future and past selves: we are very similar people, after all, and would want to spare our future selves unnecessary suffering. This gives us something like future discounting: we care less about others than about ourselves, but not zero.
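The cake reasoning can be sketched as a discounting calculation (the utilities and discount factor are invented for illustration): a temporally neutral theory like S weighs future costs at full value, while a present-aim agent who cares about future selves, but less, discounts them.

```python
def discounted_value(utilities, beta):
    """Value of a stream of per-year utilities, discounted by beta per year."""
    return sum(u * beta**t for t, u in enumerate(utilities))

# Eating the cake: +5 pleasure now, -3 health cost in each of years 8-10.
eat_cake = [5] + [0] * 7 + [-3, -3, -3]

# S is temporally neutral (beta = 1): the future costs outweigh the pleasure.
assert discounted_value(eat_cake, 1.0) < 0

# A present-aim agent who discounts future selves heavily still eats the cake.
assert discounted_value(eat_cake, 0.5) > 0
```

The two theories diverge exactly when the discounting is steep enough to shrink the future costs below the present pleasure.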

Loc: 2,744 S and P are simply related: they are both theories about rationality. S stands in a subtler relation to morality. A moral theory asks, not ‘What is rational?', but ‘What is right?' Sidgwick thought that these two questions were, in the end, the same, since they were both about what we had most reason to do. This is why he called Egoism one of the ‘Methods of Ethics'. A century later, these two questions seem further apart. We have expelled Egoism from Ethics, and we now doubt that acting morally is ‘required by Reason'. Morality and the Self-interest Theory still conflict. There are many cases where it would be better for someone if he acted wrongly. In such cases we must decide what to do, we must decide between morality and S.

The big question

Loc: 2,777 Before I start to criticize S, I shall make one general point. Some of my claims may seem implausible. This is what we should expect, even if these claims are correct. The Self-interest Theory has long been dominant. It has been assumed, for more than two millennia, that it is irrational for anyone to do what he knows will be worse for himself. Christians have assumed this since, if Christianity is true, morality and self-interest coincide. If wrongdoers know that they will go to Hell, each will know that, in acting wrongly, he is doing what will be worse for himself. Christians have been glad to appeal to the Self-interest Theory, since on their assumptions S implies that knaves are fools. Similar remarks apply to Moslems, many Buddhists, and Hindus. Since S has been taught for more than two millennia, we must expect to find some echo in our intuitions. S cannot be justified simply by an appeal to the intuitions that its teaching may have produced.

!!! Christianity uses the incentive of hell to make S align with M

Loc: 2,900 (S7) The Self-interest Theory need not assume that the bias in one's own favour is supremely rational. There is a different reply to your First Argument. The force of any reason extends over time. You will have reasons later to try to fulfil your future desires. Since you will have these reasons, you have these reasons now. This is why you should reject the Present-aim Theory, which tells you to try to fulfil only your present desires. What you have most reason to do is what will best fulfil, or enable you to fulfil, all of your desires throughout your life. This is the S-Theorist's Second Reply.

Loc: 2,960 Another pure theory is the Present-aim Theory, which rejects the requirements both of personal and of temporal neutrality. The Self-interest Theory is not pure. It is a hybrid theory. S rejects the requirement of personal neutrality, but requires temporal neutrality. S allows the agent to single out himself, but insists that he may not single out the time of acting. He must not give special weight to what he now wants or values. He must give equal weight to all the parts of his life, or to what he wants or values at all times. Sidgwick may have seen that, as a hybrid, S can be charged with a kind of inconsistency. If the agent has a special status, why deny this status to the time of acting? We can object to S that it is incompletely relative.

Like pure C, it also runs into the problem of comparing differing desires from different agents (or, for S, from different points in time). How do I choose between my desire today and my desire tomorrow, if they conflict?

Loc: 3,043 As a hybrid, S can be attacked from both directions. And what S claims against one rival may be turned against it by the other. In rejecting Neutralism, a Self-interest Theorist must claim that a reason may have force only for the agent. But the grounds for this claim support a further claim. If a reason can have force only for the agent, it can have force for the agent only at the time of acting. The Self-interest Theorist must reject this claim. He must attack the notion of a time-relative reason. But arguments to show that reasons must be temporally neutral, thus refuting the Present-aim Theory, may also show that reasons must be neutral between different people, thus refuting the Self-interest Theory.

Loc: 3,054 As I shall argue in the next chapter, it could be true that I once had a reason to promote some aim, without its being true that I have this reason now. And it could be true that I shall have a reason to promote some aim, without its being true that I have this reason now. Since these could both be true, it cannot be claimed that the force of any reason extends over time. This undermines the S-Theorist's Second Reply.

Yes, S as described here does seem undermined by this. But do 'egoists' actually believe S? It seems they really follow something closer to P, without defining it carefully.

Loc: 3,127 In this chapter I have argued that, if a reason can have force only for one person, a reason can have force for a person only at one time. We should reject the claim that any reason's force extends over time. We should therefore reject the S-Theorist's Second Reply to my First Argument, which appeals to this claim. If he has no other reply, the S-Theorist may have to return to his First Reply. He may have to claim that the bias in one's own favour is supremely rational. We should reject this claim. If the S-Theorist has no other reply, we should reject S.

Loc: 3,132 Appeal to Full Relativity. According to this appeal, the only tenable theories are morality and the Present-aim Theory, for only these give to ‘I' and ‘now' the same treatment.


Loc: 3,142 THE Self-interest Theory claims that, in our concern about our own self-interest, we should be temporally neutral. As I have said, a Present-aim Theorist can also make this claim. I shall now ask whether this claim is justified. If the answer is Yes, this is no objection to P. But, if the answer is No, this is another objection to S.

Loc: 3,146 Consider first those S-Theorists who accept the Desire-Fulfilment Theory about self-interest. On all versions of this theory, what is best for someone is what will best fulfil his desires, throughout his life. And the fulfilment of someone's desires is good for him, and their non-fulfilment bad for him, even if this person never knows whether these desires have been fulfilled. In deciding what would best fulfil my desires, we must try to predict the desires that I would have, if my life went in the different ways that it might go. The fulfilment of a desire counts for more if the desire is stronger. Should it also count for more if I have it for a longer time? In the case of strong desires, it seems plausible to answer Yes; but in the case of weak desires the answer is less clear.

Loc: 3,167 On the Desire-Fulfilment Theory, Sidgwick's axiom of Rational Benevolence becomes (RB1) What each person has most reason to do is what will best fulfil everyone's desires, and the Self-interest Theory becomes (S11) What each person has most reason to do is what will best fulfil, or enable him to fulfil, all of his own desires. A Self-interest Theorist must reject (RB1). As I remarked, he may claim (S10) I can rationally ignore desires that are not mine. A Present-aim Theorist could add (P5) I can rationally now ignore desires that are not mine now. (I write ‘could' because (P5) might be rejected on the Critical version of P.) According to the Appeal to Full Relativity, if we accept (S10) we should also accept (P5). Since an S-Theorist must accept (S10), but cannot accept (P5), he must reject the Appeal to Full Relativity. This appeal claims that reasons can be relative, not only to particular people, but also to particular times. The S-Theorist might reply that, while there is great rational significance in the question who has some desire, there is no such significance in the question when the desire is had. Is this so? Should I try to fulfil my past desires?

Loc: 3,223 if I now ceased to contribute. He must then claim that it would be irrational for me to do so. It would be irrational to cease to contribute even though I do not now have, and shall never later have, any desire to contribute. This conclusion may embarrass the Self-interest Theorist. He may be tempted to concede that a rational agent can ignore his past desires. But, if the ground for this claim is that these desires are past, this may be a damaging concession. The S-Theorist must then drop the claim that it cannot be rationally significant when some desire is had. But he must still claim that a rational agent should give equal weight to his present and his future desires. And if this claim cannot be supported by an appeal to temporal neutrality, it may be harder to defend.

Loc: 3,271 On the Self-interest Theory, this young man must give the same weight to his present and his predicted future values and ideals. This would be giving the same weight to what he now believes to be justified and what he now believes to be worthless or contemptible. This is clearly irrational. It may even be logically impossible.

Loc: 3,274 According to S, I should give equal weight to all of my present and future desires. This claim applies even to those future desires that will depend on a change in my value-judgements or ideals. When it is applied to these desires, this claim is indefensible. In the case of reasons for acting that are based on value-judgements, or ideals, a rational agent must give priority to the values or ideals that he now accepts. In the case of these reasons, the correct theory is not S but P.

Ok. Agree

Loc: 3,277 There are further grounds for this conclusion. Suppose that I believe that, with increasing knowledge and experience, I shall grow wiser. On this assumption, I should give to my future evaluative desires more weight than I give to my present evaluative desires, since my future desires will be better justified. This claim may seem to conflict with P. But this is not so. If I both assume that I am always growing wiser, and can now predict some particular future change of mind, I have in effect already changed my mind. If I now believe that some later belief will be better justified, I should have this belief now. So the assumption that I am growing wiser provides no objection to P. Even on this assumption, I can still give a special status to what I now believe to be justified.

Yes! Similar to Aumann's agreement theorem

Loc: 3,298 Once again, S occupies an indefensible mid-way position. If the sceptical argument succeeds, Neutrality wins. Like Hare's ‘liberal', I should give equal weight to the values and ideals of every well-informed and rational person. Suppose that this argument fails. If I should give more weight to my values and ideals, I should also give more weight now to what I value or believe now. The argument for the former claim, when carried through, justifies the latter.

Loc: 3,337 If the S-Theorist abandons the appeal to temporal neutrality, he must abandon what I called his Second Reply. On his new view, I did have reasons to try to fulfil my past desires; but, because I no longer have these desires, I have no reason to try to fulfil them now. If this is so, he must drop his claim that the force of any reason extends over time.

Loc: 3,408 In an experiment, someone must decide whether to endure some pain for the sake of some pleasure. This person knows that, when he has made his decision, he will take a pill that will cause him to forget this decision. This makes irrelevant the pleasures or pains of anticipation. This person also knows that we shall not tell him about the timing of this pain and this pleasure until just before he makes his decision. We describe carefully what the two experiences would involve. So that he can make a fully informed decision, this person imagines as vividly as he can what it would be like to endure this pain and enjoy this pleasure. We then tell him that the pain would be immediate and the pleasure would be postponed for a year. Would the pleasure now seem to him less vivid? At least in my own case, I am sure that it would not. Suppose that, if the pain would be immediate and the pleasure postponed for a year, this person has a mild preference for having neither. He decides that this pleasure is not quite great enough to make this pain worth enduring. We then tell him that we were misinformed: the pleasure would be immediate, and the pain postponed for a year. I think it likely that this person's preference would now change. He might now decide that it is worth enduring this pain for the sake of having this pleasure. Since this person imagined these two experiences when he did not know about their timing, such a change in his preference would not be produced by the alleged fact that later experiences always seem, in imagination, less vivid. We would have good reason to believe that this person is biased towards the near, and in a way that survives ideal deliberation.

Loc: 3,530 Case Two. When I wake up, I do remember a long period of suffering yesterday. But I cannot remember how long the period was. I ask my nurse whether my operation is completed, or whether further surgery needs to be done. As before, she knows the facts about two patients, but she cannot remember which I am. If I am the first patient, I had five hours of pain yesterday, and my operation is over. If I am the second patient, I had two hours of pain yesterday, and I shall have another hour of pain later today. In Case Two there is no amnesia; but this makes no difference. Either I suffered for five hours and have no more pain to come, or I suffered for two hours and have another hour of pain to come. I would again prefer the first to be true. I would prefer my life to contain more hours of pain, if that means that none of this pain is still to come.

Loc: 3,542 Is this preference irrational? Most of us would answer No. If he accepts this answer, the S-Theorist must abandon his claim that the question ‘When?' has no rational significance. He cannot claim that a mere difference in the timing of a pain, or in its relation to the present moment, ‘is not in itself a rational ground for having more or less regard for it'.

Loc: 3,577 Grief is not irrational simply because it brings unhappiness. To the claim ‘Your sorrow is fruitless', Hume replied, ‘Very true, and for that very reason I am sorry'.

Loc: 3,704 Let us first consider the argument with which Epicurus claimed that our future non-existence cannot be something to regret. We do not regret our past non-existence. Since this is so, why should we regret our future non-existence? If we regard one with equanimity, should we not extend this attitude to the other?

Loc: 3,709 Epicurus's argument fails for a different reason: we are biased towards the future. Because we have this bias, the bare knowledge that we once suffered may not now disturb us. But our equanimity does not show that our past suffering was not bad. The same could be true of our past non-existence. Epicurus's argument therefore has force only for those people who both lack the bias towards the future, and do not regret their past non-existence. Since there are no such people, the argument has force for no one.

Loc: 3,750 Return to my main question. Are these attitudes to time irrational? Most of us believe that the bias towards the future is not irrational. We are inclined to believe that it would be irrational to lack this bias. Thus we may be wholly unconvinced by the reasoning I gave in the case just imagined, where we are temporally neutral and shall die tomorrow. We can describe someone who does not much mind the prospect of death tomorrow, because he can now look backward to his whole life. But this attitude, though describable, may seem crazy, or to involve an absurd mistake.

Loc: 3,856 My examples reveal a surprising asymmetry in our concern about our own and other people's pasts. I would not be distressed at all if I was reminded that I myself once had to endure several months of suffering. But I would be greatly distressed if I learnt that, before she died, my mother had to endure such an ordeal.

Huh, interesting

Loc: 3,904 An S-Theorist cannot plausibly claim that this asymmetry is rationally required. In particular, he cannot plausibly appeal here to time's passage. If time's passage justifies my complete indifference to my own past suffering, or even makes this indifference a rational requirement, the S-Theorist must claim the same about my concern for those I love. It is as much true, in the imagined case of my dead mother, that her suffering is in the past.

Loc: 3,939 It was here that temporal neutrality seemed least plausible. How can it be irrational to mind my agony more when I am in agony? The S-Theorist might say: ‘In one sense, this is not irrational. Agony is bad only because of how much you mind it at the time. But, in another sense, you should not be biased towards the present. It would be irrational to let such a bias influence your decisions. Though you mind the agony more at the time, you should not, because of this, end your present agony at the foreseen cost of greater agony later. At the first-order level, you mind the agony more while you are feeling it. But you should not be more concerned about its being present rather than in the future. At the second-order level, where you make decisions that affect the length and the timing of your suffering, you can and should be temporally neutral.'


Loc: 3,998 The best version of the Present-aim Theory is the Critical version. As I wrote, CP can claim that we are rationally required to care about our own self-interest, and in a temporally neutral way. On this version of CP, Proximus is irrational, since it is irrational to be biased towards the near. If we believe that Proximus is irrational, this is no reason to accept S rather than this version of CP.

Loc: 4,116 Remember finally that all possible theories about rationality are versions of CP. Because this is true, we should accept CP whatever we believe. This feature of CP may seem to be a weakness, making it a vacuous theory. But this feature is a strength. We can see more clearly what is assumed by different theories when they are restated as versions of CP.


Loc: 4,188 If we believe that my Replica is not me, it is natural to assume that my prospect, on the Branch Line, is almost as bad as ordinary death. I shall deny this assumption. As I shall argue later, being destroyed and Replicated is about as good as ordinary survival. I can best defend this claim, and the wider view of which it is part, after discussing the past debate about personal identity.

Loc: 4,192 There are two kinds of sameness, or identity. I and my Replica are qualitatively identical, or exactly alike. But we may not be numerically identical, or one and the same person. Similarly, two white billiard balls are not numerically but may be qualitatively identical. If I paint one of these balls red, it will cease to be qualitatively identical with itself as it was. But the red ball that I later see and the white ball that I painted red are numerically identical. They are one and the same ball. We might say, of someone, ‘After his accident, he is no longer the same person'. This is a claim about both kinds of identity. We claim that he, the same person, is not now the same person. This is not a contradiction. We merely mean that this person's character has changed. This numerically identical person is now qualitatively different. When we are concerned about our future, it is our numerical identity that we are concerned about. I may believe that, after my marriage, I shall not be the same person. But this does not make marriage death. However much I change, I shall still be alive if there will be some person living who will be me.

Loc: 4,218 In the simplest case of physical continuity, like that of the Pyramids, an apparently static object continues to exist. In another simple case, like that of the Moon, an object moves in a regular way. Many objects move in less regular ways, but they still trace physically continuous spatio-temporal paths. Suppose that the billiard ball that I painted red is the same as the white ball with which last year I made a winning shot. On the standard view, this is true only if this ball traced such a continuous path. It must be true (1) that there is a line through space and time, starting where the white ball rested before I made my winning shot, and ending where the red ball now is, (2) that at every point on this line there was a billiard ball, and (3) that the existence of a ball at each point on this line was in part caused by the existence of a ball at the immediately preceding point.

Loc: 4,234 Another complication again concerns the relation between a complex thing and the various parts of which it is composed. It is true of some of these things, though not true of all, that their continued existence need not involve the continued existence of their components. Suppose that a wooden ship is repaired from time to time while it is floating in harbour, and that after fifty years it contains none of the bits of wood out of which it was first built. It is still one and the same ship, because, as a ship, it has displayed throughout these fifty years full physical continuity. This is so despite the fact that it is now composed of quite different bits of wood. These bits of wood might be qualitatively identical to the original bits, but they are not one and the same bits. Something similar is partly true of a human body. With the exception of some brain cells, the cells in our bodies are replaced with new cells several times in our lives.

Loc: 4,293 Strong connectedness is not a transitive relation. I am now strongly connected to myself yesterday, when I was strongly connected to myself two days ago, when I was strongly connected to myself three days ago, and so on. It does not follow that I am now strongly connected to myself twenty years ago. And this is not true. Between me now and myself twenty years ago there are many fewer than the number of direct psychological connections that hold over any day in the lives of nearly all adults. For example, while most adults have many memories of experiences that they had in the previous day, I have few memories of experiences that I had on any day twenty years ago. By ‘the criterion of personal identity over time' I mean what this identity necessarily involves or consists in. Because identity is a transitive relation, the criterion of identity must also be a transitive relation. Since strong connectedness is not transitive, it cannot be the criterion of identity. And I have just described a case in which this is clear. I am the same person as myself twenty years ago, though I am not now strongly connected to myself then.

Loc: 4,445 When we ask an empty question, there is only one fact or outcome that we are considering. Different answers to our question are merely different descriptions of this fact or outcome. This is why, without answering this empty question, we can know everything that there is to know. In my example we can ask, ‘Is this the very same club, or is it merely another club, that is exactly similar?' But these are not here two different possibilities, one of which must be true.

Loc: 4,502 I shall also argue for the following conclusions: (1) We are not separately existing entities, apart from our brains and bodies, and various interrelated physical and mental events. Our existence just involves the existence of our brains and bodies, and the doing of our deeds, and the thinking of our thoughts, and the occurrence of certain other physical and mental events. Our identity over time just involves (a) Relation R—psychological connectedness and/or psychological continuity—with the right kind of cause, provided (b) that this relation does not take a ‘branching' form, holding between one person and two different future people. (2) It is not true that our identity is always determinate. I can always ask, ‘Am I about to die?' But it is not true that, in every case, this question must have an answer, which must be either Yes or No. In some cases this would be an empty question. (3) There are two unities to be explained: the unity of consciousness at any time, and the unity of a whole life. These two unities cannot be explained by claiming that different experiences are had by the same person. These unities must be explained by describing the relations between these many experiences, and their relations to this person's brain. And we can refer to these experiences, and fully describe the relations between them, without claiming that these experiences are had by a person. (4) Personal identity is not what matters. What fundamentally matters is Relation R, with any cause. This relation is what matters even when, as in a case where one person is R-related to two other people, Relation R does not provide personal identity. Two other relations may have some slight importance: physical continuity, and physical similarity. (In the case of some people, such as those who are very beautiful, physical similarity may have great importance.)

Loc: 4,518 I shall then try to show that, even if we are not aware of this, we are naturally inclined to believe that our identity must always be determinate. We are inclined to believe, strongly, that this must be so. I shall next argue that this natural belief cannot be true unless we are separately existing entities. I shall then argue for conclusion (1), that we are not such entities. And I shall argue that, because (1) is true, so are my other three conclusions.


Loc: 4,638 This conclusion is not, as some write, crudely verificationist. I am not assuming that only what we could know could ever be true. My remarks make a different assumption. I am discussing a general claim about the existence of a particular kind of thing. This is claimed to be a separately existing entity, distinct from our brains and bodies. I claim that, if we have no reasons to believe that such entities exist, we should reject this belief. I do not, like verificationists, claim that this belief is senseless. My claim is merely like the claim that, since we have no reason to believe that water-nymphs or unicorns exist, we should reject these beliefs.

Loc: 4,799 In this revised form, the argument suspiciously resembles those that are involved in the Sorites Problem, or the Paradox of the Heap. We are led there, by what seem innocent steps, to absurd conclusions. Perhaps the same is happening here. Suppose we claim that the removal of a single grain cannot change a heap of sand into something that is not a heap. Someone starts with a heap of sand, which he removes grain by grain. Our claim forces us to admit that, after every change, we still have a heap, even when the number of grains becomes three, two, and one. But we know that we have reached a false conclusion. One grain is not a heap. In your appeal to the Psychological Spectrum, you claim that no small change could cause you to cease to exist. By making enough small changes, the surgeon could cause the resulting person to be in no way psychologically connected with you. The argument forced you to conclude that the resulting person would be you. This conclusion may be just as false as the conclusion about the grain of sand.

Loc: 4,819 The argument assumes that, in each of these cases, the resulting person either would or would not be me. This is not so. The resulting person would be me in the first few cases. In the last case he would not be me. In many of the intervening cases, neither answer would be true. I can always ask, ‘Am I about to die? Will there be some person living who will be me?' But, in the cases in the middle of this Spectrum, there is no answer to this question. Though there is no answer to this question, I could know exactly what will happen. This question is, here, empty. In each of these cases I could know to what degree I would be psychologically connected with the resulting person. And I could know which particular connections would or would not hold. If I knew these facts, I would know everything. I can still ask whether the resulting person would be me, or would merely be someone else who is partly like me. In some cases, these are two different possibilities, one of which must be true. But, in these cases, these are not two different possibilities. They are merely two descriptions of the very same course of events. These remarks are analogous to remarks that we accept when applied to heaps. We do not believe that any collection of sand must either be, or not be, a heap. We know that there are borderline cases, where there is no obvious answer to the question ‘Is this still a heap?' But we do not believe that, in these cases, there must be an answer, which must be either Yes or No. We believe that, in these cases, this is an empty question. Even without answering the question, we know everything.

Loc: 5,030 If we return to Simple Teletransportation, where there is no overlap between my life and that of my Replica, things are different. We could say here that my Replica will be me, or we could instead say that he will merely be someone else who is exactly like me. But we should not regard these as competing hypotheses about what will happen. For these to be competing hypotheses, my continued existence must involve a further fact. If my continued existence merely involves physical and psychological continuity, we know just what happens in this case. There will be some future person who will be physically exactly like me, and who will be fully psychologically continuous with me. This psychological continuity will have a reliable cause, the transmission of my blueprint. But this continuity will not have its normal cause, since this future person will not be physically continuous with me. This is a full description of the facts. There is no further fact about which we are ignorant. If personal identity does not involve a further fact, we should not believe that there are here two different possibilities: that my Replica will be me, or that he will be someone else who is merely like me. What could make these different possibilities? In what could the difference consist?

Loc: 5,288 It may help to state, in advance, what I believe this case to show. It provides a further argument against the view that we are separately existing entities. But the main conclusion to be drawn is that personal identity is not what matters.

Loc: 5,381 On the Reductionist View, the problem disappears. On this view, the claims that I have discussed do not describe different possibilities, any of which might be true, and one of which must be true. These claims are merely different descriptions of the same outcome. We know what this outcome is. There will be two future people, each of whom will have the body of one of my brothers, and will be fully psychologically continuous with me, because he has half of my brain. Knowing this, we know everything. I may ask, ‘But shall I be one of these two people, or the other, or neither?' But I should regard this as an empty question. Here is a similar question. In 1881 the French Socialist Party split. What happened? Did the French Socialist Party cease to exist, or did it continue to exist as one or other of the two new Parties? Given certain further details, this would be an empty question. Even if we have no answer to this question, we could know just what happened.

Loc: 5,426 Nothing is missing. What is wrong can only be the duplication. Suppose that I accept this, but still regard division as being nearly as bad as death. My reaction is now indefensible. I am like someone who, when told of a drug that could double his years of life, regards the taking of this drug as death. The only difference in the case of division is that the extra years are to run concurrently. This is an interesting difference; but it cannot mean that there are no years to run. We might say: ‘You will lose your identity. But there are different ways of doing this. Dying is one, dividing is another. To regard these as the same is to confuse two with zero. Double survival is not the same as ordinary survival. But this does not make it death. It is even less like death.' The problem with double survival is that it does not fit the logic of identity. Like several Reductionists, I claim Relation R is what matters. R is psychological connectedness and/or psychological continuity, with the right kind of cause.

Loc: 5,824 Nagel once claimed that it is psychologically impossible to believe the Reductionist View. Buddha claimed that, though this is very hard, it is possible. I find Buddha's claim to be true. After reviewing my arguments, I find that, at the reflective or intellectual level, though it is very hard to believe the Reductionist View, this is possible. My remaining doubts or fears seem to me irrational. Since I can believe this view, I assume that others can do so too. We can believe the truth about ourselves.


Loc: 5,833 Is the truth depressing? Some may find it so. But I find it liberating, and consoling. When I believed that my existence was such a further fact, I seemed imprisoned in myself. My life seemed like a glass tunnel, through which I was moving faster every year, and at the end of which there was darkness. When I changed my view, the walls of my glass tunnel disappeared. I now live in the open air. There is still a difference between my life and the lives of other people. But the difference is less. Other people are closer. I am less concerned about the rest of my own life, and more concerned about the lives of others. When I believed the Non-Reductionist View, I also cared more about my inevitable death. After my death, there will be no one living who will be me. I can now redescribe this fact. Though there will later be many experiences, none of these experiences will be connected to my present experiences by chains of such direct connections as those involved in experience-memory, or in the carrying out of an earlier intention. Some of these future experiences may be related to my present experiences in less direct ways. There will later be some memories about my life. And there may later be thoughts that are influenced by mine, or things done as the result of my advice. My death will break the more direct relations between my present experiences and future experiences, but it will not break various other relations. This is all there is to the fact that there will be no one living who will be me. Now that I have seen this, my death seems to me less bad.

Loc: 5,998 If the overlap was large, this would make a difference. Suppose that I am an old man, who is about to die. I shall be outlived by someone who was once a Replica of me. When this person started to exist forty years ago, he was psychologically continuous with me as I was then. He has since lived his own life for forty years. I agree that my relation to this Replica, though better than ordinary death, is not nearly as good as ordinary survival. But this relation would be about as good if my Replica would be psychologically continuous with me as I was ten days or ten minutes ago. As Nozick argues, overlaps as brief as this cannot be rationally thought to have much significance.56

Loc: 6,092 Williams considers a case in which a person would have many co-existing Replicas. And he suggests a new description of this kind of case. He describes the concept of a person-type. Suppose there is some particular person, Mary Smith. And suppose that the Scanning Replicator produces many Replicas of Mary Smith, as she is at a particular time. These Replicas will all be Mary Smiths. They will be different tokens, or instances, of the same person-type. If such a case occurred, there would be several questions about what matters.

Age of ems?

Loc: 6,350 I shall now turn from imaginary cases to actual lives. I shall claim that, if we change our view about the nature of personal identity, this may alter our beliefs both about what is rational, and about what is morally right or wrong.


Loc: 6,356 Some writers claim that, if the Reductionist View is true, we have no reason to be concerned about our own futures. I call this the Extreme Claim.

Loc: 6,492 Since this relation matters, I claim (C) My concern for my future may correspond to the degree of connectedness between me now and myself in the future. Connectedness is one of the two relations that give me reasons to be specially concerned about my own future. It can be rational to care less, when one of the grounds for caring will hold to a lesser degree. Since connectedness is nearly always weaker over longer periods, I can rationally care less about my further future. This claim defends a new kind of discount rate. This is a discount rate, not with respect to time itself, but with respect to the weakening of one of the two relations which are what fundamentally matter. Unlike a discount rate with respect to time, this new discount rate will seldom apply over the near future. The psychological connections between me now and myself tomorrow are not much closer than the connections between me now and myself next month. And they may not be very much closer than the connections between me now and myself next year. But they are very much closer than the connections between me now and myself in forty years.
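Parfit's connectedness discount can be sketched in symbols (my own formalisation, not from the text). Where a conventional economic discount weights my well-being at future time \(t\) by a factor \(\delta^{t}\) with \(\delta < 1\), the proposal here weights it by the degree of psychological connectedness \(c(t)\) between me now and myself at \(t\):

```latex
% Hypothetical formalisation, not Parfit's notation.
% u_t : my well-being at future time t
% delta : a conventional time-discount factor, 0 < delta < 1
% c(t) : degree of psychological connectedness between me now
%        and myself at t, with 0 <= c(t) <= 1
\text{Time discounting:}\quad \sum_{t} \delta^{t}\, u_t
\qquad\qquad
\text{Connectedness discounting:}\quad \sum_{t} c(t)\, u_t
```

Since \(c(t)\) stays close to 1 over days and months but falls substantially over decades, this discount, unlike \(\delta^{t}\), barely applies over the near future.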

Loc: 6,565 Return to my argument against the Self-interest Theory. This argument shows, I believe, that we must reject the Requirement of Equal Concern. According to this requirement, I should now care equally about all the parts of my future. It is irrational to care less about my further future—to have what economists call a discount rate. This may be irrational if I have a discount rate with respect to time. But this is not irrational if I have a discount rate with respect to the degrees of psychological connectedness.


Loc: 6,640 Autonomy does not include the right to impose upon oneself, for no good reason, great harm. We ought to prevent anyone from doing to his future self what it would be wrong to do to other people. Though these claims support paternalism, there remain the well-known objections. It is better if each of us learns from his own mistakes. And it is harder for others to know that these are mistakes.

Loc: 6,714 On the Incompatibilist View, Determinism undermines both free will and desert. On this second view, if it was causally inevitable that I committed my crime, I cannot deserve to be punished. If it is morally justified to put me in prison, this can only be on utilitarian grounds. One such ground is that my imprisonment may deter others from committing crimes. And, notoriously, it would be irrelevant if I am merely falsely believed to have committed some crime. Since even the guilty do not deserve punishment, it will make no moral difference if I am in fact innocent.

Loc: 6,758 There is here an asymmetry.

Parfit often looks for asymmetries in a situation to build his arguments; a useful tool.

Loc: 6,766 The Nineteenth Century Russian. In several years, a young Russian will inherit vast estates. Because he has socialist ideals, he intends, now, to give the land to the peasants. But he knows that in time his ideals may fade. To guard against this possibility, he does two things. He first signs a legal document, which will automatically give away the land, and which can be revoked only with his wife's consent. He then says to his wife, ‘Promise me that, if I ever change my mind, and ask you to revoke this document, you will not consent.' He adds, ‘I regard my ideals as essential to me. If I lose these ideals, I want you to think that I cease to exist. I want you to regard your husband then, not as me, the man who asks you for this promise, but only as his corrupted later self. Promise me that you would not do what he asks.' This plea, using the language of successive selves, seems both understandable and natural. And if this man's wife made this promise, and he did in middle age ask her to revoke the document, she might plausibly regard herself as not released from her commitment. It might seem to her as if she has obligations to two different people.

Loc: 6,929 There are two kinds of distribution: within lives, and between lives. And there are two ways of treating these alike. We can apply distributive principles to both, or to neither. Utilitarians apply them to neither. I suggest that this may be, in part, because they accept the Reductionist View.

Loc: 7,071 Consider the relief of suffering. Suppose that we can help only one of two people. We shall achieve more if we help the first; but it is the second who, in the past, suffered more. Those who believe in equality may decide to help the second person. This will be less effective; so the amount of suffering in the two people's lives will, in sum, be greater; but the amounts in each life will be made more equal. If we accept the Reductionist View, we may decide otherwise. We may decide to do the most we can to relieve suffering. To suggest why, we can vary the example. Suppose that we can help only one of two nations. The one that we can help the most is the one whose history was, in earlier centuries, more fortunate. Most of us would not believe that it could be right to allow mankind to suffer more, so that the suffering was more equally divided between the histories of different nations. In trying to relieve suffering, we do not regard nations as the morally significant units. On the Reductionist View, we compare the lives of people to the histories of nations. We may therefore think the same about them. We may believe that, when we are trying to relieve suffering, neither persons nor lives are the morally significant units. We may again decide to aim for the least possible suffering, whatever its distribution.



Loc: 7,214 THERE is another question about personal identity. Each of us might never have existed. What would have made this true? The answer produces a problem that most of us overlook. One of my aims in Part Four is to discuss this problem. My other aim is to discuss the part of our moral theory in which this problem arises. This is the part that covers how we affect future generations. This is the most important part of our moral theory, since the next few centuries will be the most important in human history.

They will? Maybe the most important centuries will be in another 2,000 years.

Loc: 7,317 Most of our moral thinking is about Same People Choices. As I shall argue, such choices are not as numerous as most of us assume. Very many of our choices will in fact have some effect on both the identities and the number of future people. But in most of these cases, because we cannot predict what the particular effects would be, these effects can be morally ignored. We can treat these cases as if they were Same People Choices.

Loc: 7,360 I believe that it is defensible both to claim and to deny that causing to exist can benefit. I shall discuss the implications of both views.

Loc: 7,396 These are Same Number Choices, which affect the identities of future people, but do not affect their number. We might suggest The Same Number Quality Claim, or Q: If in either of two possible outcomes the same number of people would ever live, it would be worse if those who live are worse off, or have a lower quality of life, than those who would have lived. This claim is plausible. And it implies what we believe about the 14-Year-Old Girl. The child that she has now will probably be worse off than a child she could have had later would have been, since this other child would have had a better start in life. If this is true, Q implies that this is the worse of these two outcomes. Q implies that it would have been better if this girl had waited, and had a child later.


Loc: 7,792 I shall later ask how many people there should ever be. In a complete moral theory, we cannot avoid this awesome question. And our answer may have practical implications. It may, for example, affect our view about nuclear weapons. In most of what follows, I discuss a smaller question. How many people should there be, in some country or the world, during a certain period? When would there be too many people living?

Loc: 7,958 Return now to my imagined Z. This imagined population is another Utility Monster. The difference is that the greater sum of happiness comes from a vast increase, not in the quality of one person's life, but in the number of lives lived. And my Utility Monster is neither deeply impossible, nor something that we cannot imagine. We can imagine what it would be for someone's life to be barely worth living. And we can imagine what it would be for there to be many people with such lives. In order to imagine Z, we merely have to imagine that there would be very many. This we can do. So the example cannot be questioned as one that we can hardly understand.

The famous Repugnant Conclusion. I'm not sure we really can imagine that. If we imagine small numbers, say one somewhat happy person versus ten people with lives barely worth living, it seems easy to accept. The issue may be with what each person imagines 'barely worth living' to mean.
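The comparison behind Z can be made concrete with illustrative numbers of my own (not Parfit's): on a total view, a vast enough population at a tiny positive welfare level outscores a small population at a very high one.

```latex
% Illustrative numbers only.
% A : small flourishing population; Z : vast population
%     whose lives are barely worth living.
\text{Total}(A) = 10 \times 100 = 1000
\qquad
\text{Total}(Z) = 100{,}000 \times 0.1 = 10{,}000 > \text{Total}(A)
```

However small the positive welfare level in Z, some population size makes its total exceed A's; that is the Repugnant Conclusion.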


Loc: 7,983 WE need a theory that both solves the Non-Identity Problem and avoids the Repugnant Conclusion. As we shall see, several theories achieve one of these aims at the cost of failing to achieve the other.

Loc: 7,993 Some writers claim that, while we would have a duty not to conceive the Wretched Child, my couple have no duty to conceive the Happy Child. It would merely be morally better if they had such a child. Many people would deny even this last claim. These people believe that, while it would be wrong to have the Wretched Child, my couple have no moral reason to have the Happy Child.25 This view has been called the Asymmetry.26 If we accept this view, we have a third aim. Besides solving the Non-Identity Problem and avoiding the Repugnant Conclusion, we must also explain the Asymmetry. What theory would achieve these aims?

The asymmetry sounds like wishful thinking. Morality may just be hard to satisfy.

Loc: 8,212 Other people would accept less extreme versions of this claim. Since we want to avoid the Repugnant Conclusion, it will help to distinguish two groups of answers. These can be introduced with two questions, as shown on the next page.

The difficulty of avoiding the repugnant conclusion makes it seem even more likely to me that the real solution is to just accept it.

Loc: 8,461 We must avoid the Absurd Conclusion. This conclusion followed from the asymmetry in our claims about the value of quantity. We placed a limit on quantity's positive value, within some period, but we placed no limit on its negative value. To avoid the Absurd Conclusion we must abandon this asymmetry. We cannot plausibly place a limit on quantity's negative value. It is always bad if there is more uncompensated suffering, and this badness never declines. We must therefore remove the limit on quantity's positive value.


Loc: 8,541 The Average Principle has other implications which are absurd. Suppose that Eve and Adam lived these wonderful lives. On the Average Principle it would be worse if, not instead but in addition, the billion billion other people lived. This would be worse because it would lower the average quality of life. This way of lowering the average, by Mere Addition, cannot be plausibly claimed to be bad.
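The Mere Addition point is simple arithmetic (my own illustrative numbers): adding extra lives that are well worth living can still drag the average down.

```latex
% Illustrative numbers only. Eve and Adam at welfare 100;
% Mere Addition of a billion billion people at welfare 50,
% all with lives well worth living.
\text{Average before} = \frac{2 \times 100}{2} = 100
\qquad
\text{Average after} = \frac{2 \times 100 + 10^{18} \times 50}{2 + 10^{18}} \approx 50
```

On the Average Principle the addition makes the outcome worse, though it is bad for no one.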

Loc: 8,567 A similar claim again applies to the birth of any child. Suppose that, in the further future, the quality of life will for many centuries be extremely high. It is then more likely to be bad if I have a child, even if my child's life would be well worth living, and his existence would be bad for no one. It is more likely that my child's existence would lower the average quality of all future lives. This cannot be relevant. Whether I should have a child cannot depend on what the quality of life will be in the distant future.

Loc: 8,574 Hell Three. Most of us have lives that are much worse than nothing. The exceptions are the sadistic tyrants who make us suffer. The rest of us would kill ourselves if we could; but this is made impossible. The tyrants claim truly that, if we have children, they will make these children suffer slightly less. On the Average Principle, we ought to have these children. This would raise the average quality of life. It is irrelevant that our children's lives would be much worse than nothing. This is another absurd conclusion.

Parfit is looking for a consistent theory that matches our intuitions as closely as possible. This 'overcomes' is/ought by seeing morality more as a tool than as a thing that exists in the world to be discovered.

Loc: 8,721 The argument can be restated. Suppose that we are considering possible states of the world many centuries ago, perhaps in the Ninth Century. There is no ground for fear about future consequences; we know what happened later. Suppose next that A + was the actual state of the world in this past century. We can then ask, would it have been better if the actual state had been A? In asking this, we can suppose that A + did not later change into B. The existence of the worse-off group in A + did not affect the better-off group for the worse. And, since the groups could not communicate, there was no social injustice. Given these facts, was A + worse than A would have been? Was it bad that the worse-off group ever lived?

Loc: 8,958 we need a new theory about beneficence. This must solve the Non-Identity Problem, avoid the Repugnant and Absurd Conclusions, and solve the Mere Addition Paradox. I failed to find a theory that can meet these four requirements. Though I failed to find such a theory, I believe that, if they tried, others could succeed.

Loc: 8,995 Since we should reject S, our theory must be, in one way, more impersonal. It must not claim that each person's supreme concern should be himself; and it must not give supreme importance to the boundaries between lives. But our theory need not be Sidgwick's Principle of Impartial Benevolence. We should accept the Critical Present-aim Theory, or CP. On this theory, the fundamental unit is not the agent throughout his whole life, but the agent at the time of acting. Though CP denies the supreme importance of self-interest, and of a person's whole life, it is not impersonal. CP claims that what it is rational for me to do now depends on what I now want, or value, or believe. This claim gives more importance to each person's particular values or beliefs. Since CP gives more importance to what distinguishes different people, in this different way it is more personal than S.

Loc: 9,045 Ethics asks which outcomes would be good or bad, and which acts would be right or wrong. Meta-Ethics asks what is the meaning of moral language, or the nature of moral reasoning. It also asks whether Ethics can be objective—whether it can make claims that are true.

Loc: 9,136 In the meanwhile, we should conceal this problem from those who will decide whether we increase our use of nuclear energy. These people know that the Risky Policy might cause catastrophes in the further future. It will be better if these people believe, falsely, that the choice of the Risky Policy would be against the interests of the people killed by such a catastrophe. If they have this false belief, they will be more likely to reach the right decision.

I knew it!

Loc: 9,157 There is another ground for doubting Moral Scepticism. We should not assume that the objectivity of Ethics must be all-or-nothing. There may be a part of morality that is objective. In describing this part, our claims may be true. When we consider this part of morality, or these moral questions, we may find the Unified Theory that would remove our disagreements. There may be other questions about which we shall never agree. There may be no true answers to these questions. Since objectivity need not be all-or-nothing, moral sceptics may be partly right. These questions may be subjective. But this need not cast doubt on the Unified Theory.


Last modified 2019-08-16 Fri 16:27.