Misc: Reinstein notes on population ethics

David Reinstein

"Why do this, why try to generate a best take on 'my welfare function'" without having gone through the literature (much, only listening to podcasts etc)?

  1. Uninformed theorizing may bring a new perspective that reading the literature first would have constrained

  2. Narrating the process of 'figuring things out' may bring insights to others (helping newbies learn about population ethics, and giving experts a sense of how an economist would take this on)

A simple example, which I argue is consistent with RW intuitions (expand):

- Prob (1-p) of 'small world' S with N_S = 100 million people, indexed by i

- Prob p of 'large world' L with N_L = 100 billion people, indexed by i

As a 'baseline', think of p=1/2 for intuition; assume that if you didn't act, this would be its value.

Ignore the 'non-identity' issue (e.g., treat the 100 million people in S as also existing among the 100 billion in L).

Each individual has a value function v_i(W); call V_W the vector of these values across all individuals in world W.

For simplicity, we might assume that every individual (in a particular world W) gets the same v_i(W); this can be adjusted in light of further arguments, discussed below.

For now assume all value/happiness is positive, and/or we have a clear way of weighing negatives and positives (as well as among positive states)
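
A minimal sketch of this setup in Python, under the simplifying assumption above that everyone in a given world gets the same value; the names here (World, v_bar, total_value, average_value) are illustrative, not anything standard.

```python
# Minimal sketch of the two-world setup. The common per-person value v_bar and
# all names here are illustrative assumptions, not from the notes or literature.
from dataclasses import dataclass

@dataclass
class World:
    name: str
    population: int   # number of people who exist if this world obtains
    v_bar: float      # common per-person value v_i(W), assumed equal for all i

# Small world S and large world L, with baseline probability p = 1/2 of L
S = World("S", population=100_000_000, v_bar=1.0)        # 100 million people
L = World("L", population=100_000_000_000, v_bar=1.0)    # 100 billion people
p = 0.5                                                  # Prob(L); Prob(S) = 1 - p

def total_value(w: World) -> float:
    """Sum of v_i(W) across everyone in world w (all equal to v_bar here)."""
    return w.population * w.v_bar

def average_value(w: World) -> float:
    """Average of v_i(W) across everyone in world w."""
    return w.v_bar

print(total_value(S), total_value(L))   # 1e8 vs 1e11: L has 1000x the total value
```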

You can 'donate' (or take costly actions) by choosing a vector (g_1, g_2, g_3), each component of which may potentially affect one of the following (a sketch in code follows this list):

  1. the values each individual gets (happiness whatever) in state S

  2. the values each individual gets in state L

  3. the probability p of a 'large world' L
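
A rough sketch of how these choice variables might enter; the additive form and the function name below are my own illustrative assumptions, not a claim about the right parameterization.

```python
# Illustrative sketch: an action g = (g_1, g_2, g_3) shifts per-person value in S,
# per-person value in L, and the probability p of the large world (additively,
# purely for simplicity).
def apply_action(v_S: float, v_L: float, p: float,
                 g1: float, g2: float, g3: float) -> tuple[float, float, float]:
    """Return the post-action (v_S, v_L, p)."""
    new_p = min(max(p + g3, 0.0), 1.0)   # keep the probability inside [0, 1]
    return v_S + g1, v_L + g2, new_p

# Example: raise everyone's value in S a little and nudge p upward.
print(apply_action(1.0, 1.0, 0.5, g1=0.1, g2=0.0, g3=0.05))  # (1.1, 1.0, 0.55)
```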

Justifications

Aside: Why not include 'extinction' or 0 population?

  • I think this conjures non-utilitarian, non 'add up across people' values

  • We already know there will be at least some people ... alive today

I want a 'consequentialist utilitarian welfare function':

I think I should care about outcomes (probability-weighted, perhaps), mediated through how the people themselves value their own states. I think I should act so as to best maximize this. I don't want the 'thing I am maximizing' to depend on my actions, on how I think about my actions, or on 'why' I chose something.

"Deciding between"

  1. 'Average utilitarian' (or some function like that)

  2. Total utilitarian

  3. Some representation of 'person-affecting' views (but it's hard to achieve that with a simple social welfare function, SWF)

    1. I am considering allowing a function of both the vector of happinesses V_W and the population N_W in a state W ... but that does seem a bit of a fudge
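
For concreteness, here is one natural way to write the first two candidates as expected-value objectives over the S/L lottery, again assuming uniform per-person values; this is only a sketch of one formalization.

```python
# Hedged sketch of the two candidate objectives over the lottery between S and L,
# with uniform per-person values (my own formulation).
N_S, N_L = 100_000_000, 100_000_000_000   # populations of S and L

def expected_total(v_S: float, v_L: float, p: float) -> float:
    """Total utilitarian: probability-weighted sum of everyone's value."""
    return (1 - p) * N_S * v_S + p * N_L * v_L

def expected_average(v_S: float, v_L: float, p: float) -> float:
    """Average utilitarian: probability-weighted average value per person."""
    return (1 - p) * v_S + p * v_L

print(expected_total(1.0, 1.0, 0.5))    # 5.005e10: dominated by world L
print(expected_average(1.0, 1.0, 0.5))  # 1.0: indifferent to which world obtains
```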

Simple cases:

I. Suppose you can affect only the probability p of the large world, but not welfare levels

  • Here the 'average utility' model seems appropriate for a person with person-affecting views (PAV); you don't care which world exists, so you won't invest in changing p (see the numeric sketch below)

  • True, if you 'could' affect happinesses, it does seem weird and unfair that you are valuing each person in the Small world so much more ... but as you can't, this doesn't guide you to an 'unjustifiable decision'

Possible variation: you can affect both p and V_S but not V_L ... this would probably preserve the same conclusion
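
A quick numeric check of case I under the same illustrative assumptions (equal per-person values in S and L): the average objective is completely flat in p, so it gives no reason to invest in changing p, while the total objective rises steeply with p.

```python
# Case I: only p can be moved; per-person values are held fixed (illustrative numbers).
N_S, N_L = 100_000_000, 100_000_000_000
v_S = v_L = 1.0

def expected_total(p: float) -> float:
    return (1 - p) * N_S * v_S + p * N_L * v_L

def expected_average(p: float) -> float:
    return (1 - p) * v_S + p * v_L

for p in (0.4, 0.5, 0.6):
    print(p, expected_average(p), expected_total(p))
# The average objective stays at 1.0 for every p (no reason to invest in p);
# the total objective climbs from ~4.0e10 to ~6.0e10 as p rises.
```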

II. Suppose you can only affect the happinesses, but not the probability of each state

  • Here the 'total utilitarian' model prescribes the 'right action'. For the same cost, you would improve the life of someone in world S rather than someone in world L, but you would value this at only approximately 1/p = 2x as much in our example ... only because each person in world S is about twice as likely to exist as each of the additional people in world L

  • But this feels weird to a PAV, because with this SWF, if you could increase the probability of world L, you would do so. The actual value function puts (in our example) about 1000 times more total value on world L than on world S (worked arithmetic below)
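
The arithmetic behind the '1/p = 2x' and '1000 times' figures, on my reading that (with the non-identity issue set aside) the 100 million people of S also exist in L, so they exist with probability 1 while the extra L people exist only with probability p:

```python
# Worked arithmetic for case II (the subset reading above is my assumption).
p = 0.5
delta = 1.0                         # size of the welfare improvement you can buy

benefit_S_person = 1.0 * delta      # an S-person exists in either world
benefit_L_only_person = p * delta   # an L-only person exists with probability p
print(benefit_S_person / benefit_L_only_person)   # 1/p = 2.0

# ...but the total value at stake is ~1000x larger in world L than in world S:
N_S, N_L, v_bar = 100_000_000, 100_000_000_000, 1.0
print((N_L * v_bar) / (N_S * v_bar))              # 1000.0
```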

"If you can affect both" ... how to reconcile it?

  1. A 'correction term' to either the average or total model that implies that 'increasing p should have little or no value' (a toy version is sketched below)

  2. A 'value of my actions (and outcomes?)' that treats the components of my decision (g_1, g_2, g_3) differently insofar as they affect p versus the value functions
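
A purely hypothetical toy version of option 1 (my own construction, not anything from the notes or the literature): give full credit to welfare changes, but only a lam-weighted credit to the part of the change that comes through moving p, measured against the no-action baseline p0. With lam = 0, raising p adds nothing to the objective; with lam = 1 we are back to the plain total view.

```python
# Hypothetical 'correction term' sketch for option 1 (illustrative only).
N_S, N_L = 100_000_000, 100_000_000_000

def expected_total(v_S: float, v_L: float, p: float) -> float:
    return (1 - p) * N_S * v_S + p * N_L * v_L

def corrected_objective(v_S: float, v_L: float, p: float,
                        p0: float = 0.5, lam: float = 0.0) -> float:
    """Full credit for welfare changes, only lam-weighted credit for moving p."""
    welfare_part = expected_total(v_S, v_L, p0)               # p held at baseline
    p_part = expected_total(v_S, v_L, p) - expected_total(v_S, v_L, p0)
    return welfare_part + lam * p_part

# With lam = 0, pushing p from 0.5 up to 0.6 leaves the objective unchanged:
print(corrected_objective(1.0, 1.0, 0.6) == corrected_objective(1.0, 1.0, 0.5))  # True
```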
