You’ve been given 10 billion dollars. Well, you’ve been entrusted with 10 billion dollars, and instructed to give it away to the life-saving initiatives of your choice, in the form of 1,000 different 10 million dollar grants. With so many grants to oversee, it would probably make sense to establish some minimum guidelines for what qualifies a project to be considered. As these are supposed to be life-saving grants, the first question you might be inclined to address is this: how many lives must a project promise to save before it’s eligible for a 10 million dollar grant?
This question is just an abstract way of asking how much money you feel a life is worth. So, if you don’t have an answer to how much money a life is worth, then given your situation as the holder of these grants, this would be a reasonable question to ask: what is the minimum number of lives an initiative must save to warrant a 10 million dollar grant? And wouldn’t it be reasonable to establish one single, consistent answer to this question, regardless of how many lives are at risk? But when answering these questions, our minds aren’t always reasonable.
In a study involving a hypothetical grant-funding agency, like the one you were entrusted with running, respondents were asked a version of this same question without being told how many lives were at risk, and then asked a few more times with that number varied. And as the number of at-risk people increased, two thirds of respondents also increased the threshold of lives saved that warrants a 10 million dollar grant. When told that 15,000 people were at risk, respondents set 9,000 as the minimum, which implicitly valued each life saved at about 1,100 dollars. But when the at-risk population increased to 290,000 people, respondents for some reason drastically lowered the value of a saved life from about 1,100 dollars to 100 dollars, by saying that a 10 million dollar grant was only warranted if it saved 100,000 people.
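If you want to see that arithmetic spelled out, here’s a small sketch (in Python, my own illustration, not anything from the study itself) of how the implied per-life value falls out of each answer:

```python
# The implied dollar value of one life = grant size / minimum lives required.
GRANT = 10_000_000  # the 10 million dollar grant

def implied_value_per_life(min_lives_saved):
    """Dollar value respondents implicitly assign to each saved life."""
    return GRANT / min_lives_saved

# 15,000 at risk -> respondents set the threshold at 9,000 lives saved
print(implied_value_per_life(9_000))    # ~1,111 dollars per life
# 290,000 at risk -> threshold jumps to 100,000 lives saved
print(implied_value_per_life(100_000))  # 100.0 dollars per life
```

Same grant, same kind of lives; only the framing changed, and the implied value of a life dropped by a factor of about eleven.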
And the same study explored how willing people would be to send money to a life-saving program in a Rwandan refugee camp that would save 4,500 lives. But keep in mind, throughout this episode and all other Impactivism episodes, that value and willingness to pay are just easily exemplified through dollar amounts; they apply to all of our resources, including the time we’re willing to spend, our mental capacity, and our energy in all the forms we’re willing to give. Anyway, I had to mention that quickly, though I hope it was already apparent.
So the study found that as the size of the camp increased, people were less willing to contribute to saving the same 4,500 people. What this would suggest is that if the US, or your country, went to war and you fled across the Canadian border seeking refuge and safety in a displaced persons community, the value of your life, according to the flawed perceptions of popular opinion, would decrease with every additional person moving into your community, and it would increase every time someone left your community for another. But whether there are 1,000 or 100,000 people who happen to live nearby, I wouldn’t be wrong to say that you and your family and friends would still be living and breathing, feeling the same pain and pleasure, and would still possess the inherent right, as human beings, not to suffer and to live well. Would I?
Unfortunately, Joseph Stalin was right. Well, allegedly. He is credited with first saying this to the then US Ambassador to the Soviet Union during World War II: the death of one man is a tragedy, the death of millions is a statistic.
Trying to think about this as simply as possible, that makes sense. A tragedy is usually what we can see, and what we can really comprehend. Its severity swells within our own perception relative to how much we can grasp. If we don’t have the story and the details, a tragedy is not real. And when one person dies, the story can be grasped and told; our minds have the capacity to comprehend the nuance and detail in that one story and process it all emotionally.
But the stories of millions? That’s incomprehensible even if we tried, as there’s not enough time in the day, let alone emotional strength, to process so much sorrow or understand it. And in the end, trying to do so would just be self-defeating, as there’s a bottomless pit of statistics in the world to itemize into stacks of individual tragedy. But what good does that do? Well, I’d argue it does harm. If we want to do good, and to minimize the tragedies, the goal is not to empathize, but rather to find rational compassion for the issue, which we can then act upon with that rationality, and in the process maximize our likelihood of doing good. If we can take action to alleviate suffering, wouldn’t it be best if we could do so without feeling that pain? Empathy in this sense is not a good thing. But if empathy, feeling someone else’s pain, is necessary to catalyze action, then there may be value in it. Still, it’s the compassion of caring even when we can’t feel their pain that matters most. And our minds can function at full capacity regardless of how much compassion we hold. But when we increase our empathy, at a certain point there are limits, and the pain we feel on behalf of others becomes destructive.
If I close my eyes, I can very easily see a face, and if that face is displaying pain, I can feel that. If I try to visualize 10 faces in one frame, I suppose I can, but I can no longer see their eyes or feel their presence so much, and empathy fades. Then 100: I can’t see their faces, but I can imagine a crowd of anonymous bodies huddled together. Then 1,000: I have to zoom out so far that the bodies become dots, and it’s pretty hard to find compassion for a dot. Then beyond that point, when you close your eyes and even the smallest dots bleed together into a blob of unknowable size, the 3,000, 10,000, a million, a billion, they’re all the same. The many, many humans, the many tragedies, become a statistic. So, if we’re depending on empathy, on feeling someone’s pain, to push us to act, it’s no mystery why single tragedies would spark more empathy, and thus more action, than statistics do. But maybe there’s an important distinction to make between empathy and compassion. I can’t possibly empathize with 10,000 people at once, but I can absolutely have compassion for 10,000 people. And if I hold that compassion to a certain standard of rationality, by deliberately deciding to expend the mental effort and energy to do so, then I’m forced to remind myself that impacting 10,000 lives ought to be precisely 10,000 times as valuable as impacting one life. Our rational compassion would draw this distinction in order to seek the most efficient and effective actions, NOT to feel the most pain. But empathy and compassion will be its own episode before too long, so I won’t elaborate much further.
Imagine you walk into your room for the night and see 10 pages of text lying on your left nightstand and a stack of 10,000 pages on the right, topped with a neat summary on page one. Each stack is a story. The 10-page story is readable, consumable and comprehensible, the magnitude and depth we’re used to in short stories. And in those few thousand words, there is room for so much beauty: character development, build-up, a climax and a resolution. You could read it all, take it all in now before bed. But the 10,000 pages? I don’t know about you, but I have too many books on my list to ever start reading something beyond 400 pages, so this is just way too much; it’s intimidating at best, and unachievable at worst, even if I tried. And though I can only imagine how much depth and complexity and intensity and beauty is inside, even if I did read it all the way through, I don’t know if I have the capacity to understand a story so complex that it requires 10,000 pages to be told. So, I can read the one-page summary at the beginning to get the most basic, fundamental gist. But if every page is a story of one, how do you fit 10,000 tragedies onto one page? You can’t, so you use numbers with extra zeros, and those extra zeros just don’t tend to make a difference after a certain point.
To highlight how numbed we are to those extra zeros, consider this example. One study asked three groups of people of similar wealth and demographics what they would donate to save 2,000 birds, 20,000 birds, and 200,000 birds from an oil spill. And, sadly, the first group was willing to pay 80 dollars to save 2,000 birds, the second 78 dollars to save 20,000 birds, and the third 88 dollars to save 200,000 birds. I mentioned this study in episode 6 when I talked about moral licensing, so check that out if you get a chance.
Similar experiments showed that Toronto residents would pay little more to clean up all the polluted lakes in Ontario than the polluted lakes in one particular region of Ontario. To elaborate, there are 250,000 lakes spread across Ontario; I don’t have the exact number within the small region cited in the study, but it is a very small fraction of that. And another experiment showed that residents of four western US states would pay a certain amount to protect one single wilderness area, then would pay only 28% more to protect all 57 wilderness areas in those states.
These phenomena reflect a cognitive bias of ours, a mental imperfection, referred to as scope insensitivity, or scope neglect. This bias often leaves us acting as though 100 deaths, or 100 lives saved, were less than 100 times as significant as one. Individual, relatable humans become masses, and masses become abstract, unrelatable statistics.
When considering saving human lives, our minds follow a similar psychological function, experiencing the same cognitive limitations as they do when processing loudness, brightness, and heaviness, as well as wealth. In each of these perceptions, we experience diminishing sensitivity to changes as magnitudes increase. When we think about wealthy people, a billionaire doesn’t seem a thousand times richer than a millionaire; they’re both just rich. And the difference between a 45-pound weight and a 50-pound weight does not feel like the difference between 5 and 10 pounds. In the same way, as the number of lives lost or saved increases, the perceived significance of each additional life atop a large number decreases. We’re just not able to process it, in the same way that a small increase in the volume of quiet music in our living room really stands out, but that same increase in the middle of a concert would go completely unnoticed.
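One common way to model this diminishing sensitivity, and this is just a rough illustration of the psychophysics pattern described above, not a claim about the exact function our minds use, is a logarithmic response: what registers is the ratio of the change, not its absolute size. A tiny Python sketch of my own:

```python
import math

def felt_change(base, increment):
    """Perceived size of an increase under a logarithmic model:
    the ratio matters, not the absolute difference."""
    return math.log10((base + increment) / base)

# Adding 5 pounds to a 5-pound weight feels large...
print(felt_change(5, 5))    # log10(2) ~ 0.30
# ...but adding the same 5 pounds to 45 barely registers.
print(felt_change(45, 5))   # log10(50/45) ~ 0.046
```

The same 5-pound increment produces a perceived change roughly seven times smaller when it sits atop a larger base, which is exactly the shape of the bias: each additional life atop a large number feels smaller and smaller.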
We’re clearly insensitive to animal lives, as shown in the bird example, and in the way that we have a similar reaction to hearing about Cecil the lion, one single lion, as to the 60 billion farmed animals slaughtered each year, though that’s the result of a LOT more cognitive biases than just scope insensitivity, but that’s for another time. And we’re clearly insensitive to environmental harm, as shown by the lake example. But you’d think that if anything could counter this bias, it would be human lives. Yet we’ve seen that that’s not the case either.
For one more example related to humans, a study published in the Journal of Experimental Psychology analyzed how much people were willing to pay into public services to prevent deaths. When the risk of death from chlorinated drinking water was set at 0.004 per 1,000, or 4 deaths per million people per year, they were willing to pay $3.78 per year to eliminate that risk. When the risk increased to 2.43 per 1,000, or about 2,400 per million per year, roughly 600 times as many deaths, they were willing to pay only $15.23 to eliminate that risk.
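As a quick sanity check on those numbers, here is my own arithmetic, in Python, using the figures above:

```python
# Risk levels from the study, deaths per 1,000 people per year.
low_risk, high_risk = 0.004, 2.43
# Stated willingness to pay, dollars per year.
low_wtp, high_wtp = 3.78, 15.23

risk_ratio = high_risk / low_risk  # how many times more deaths
wtp_ratio = high_wtp / low_wtp     # how many times more dollars

print(round(risk_ratio))     # 608 -> roughly 600x the deaths
print(round(wtp_ratio, 1))   # 4.0 -> only about 4x the payment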
And when this experiment was tweaked, they found that when that factor was brought down from 600 times as many deaths to 10 times as many deaths, there was no willingness to pay any different. Meaning that extra zero meant absolutely nothing, and simply because of the framing and our lack of critical though here, our lack of awareness of our scope insensitive, we valued saving 1 life as exactly as valuable as saving 10 lives.
All these studies may indicate another phenomenon that seems to suggest we have a finite amount of resources or energy to expend on altruistic actions. And as we don’t have unlimited time, energy and resources to give to doing good, sometimes our minds are less interested in purchasing impact with its energy use and resources, but rather purchasing a feeling, that sort of warm glow you get when you estimate that you’ve done your part today, of having successfully lived up to your standard of ethics for this period of time. This is the high of the benefactor. This idea is again pretty entwined with moral licensing, so do listen to that episode if you get a chance, episode 6.
Unfortunately, this warm glow or benefactors high seems to be inherently scope insensitive itself. I’m not suggesting that this is unnatural in any way. Maybe I’m making this podcast because this IS entirely natural, and because of this, it takes deliberate effort, energy and thoughtfulness to get better at something we’re not so great at. I guess this podcast is also highlighting the consequences of not getting better doing good, because the difference of world without so many of these biases leading us to think and act irrational, and a world with them, is the difference between a dysfunctional one where we have the infinite potential to create wellbeing but suffering persists, and one where we live up to our potential. I guess a lot of this just seems to reaffirm how much we value the act of doing good as an end in itself, instead of a means towards the end of impacting change.
In the end, when considering the rationality of your ethical choices and actions, keep the scope in mind, and be sure to avoid designating diminishing value to additional good. Make sure to act in accordance with the belief that your best friend is the exact same person with the same right to life, wellbeing and happiness when their standing alone in a field, or in a field beside 10,000 people. And that you 10 closest friends lives are collectively 10 times as important as one of theirs. And the same logic should apply to every other human and nonhuman animal out there.