This note is the seventeenth letter in the 104-days-of-summer-vacation series. You can also follow the full Twitter thread here, and leave any thoughts and comments that might come up!
Dearest Reader,
Doing good is good: it makes us happy to be of service to others. So, whether it's something small or big, I hope you get the opportunity to help someone else today. Doing good is also the core principle of Effective Altruism; I wrote about some of its career advice yesterday in do-what-contributes.
In essence, Effective Altruism turns the idea of maximizing the amount of good contributed to the world into an optimization problem. Using metrics like QALYs (quality-adjusted life years), the EA community advocates a rational approach to selecting and participating in the activities of highest value to the world.
This sounds perfectly reasonable. When the EA movement first started, it focused on improving living conditions in developing countries (mosquito nets being a prime example), where an additional dollar of investment yields much greater returns than it would on first-world problems.
Additionally, effective altruists like to work on neglected problems that cause great suffering, where an additional unit of work is more valuable than it would be if allocated to an alternative cause that already has plenty of manpower. This rational, metric-driven approach leads directly to EA’s brand as the evidence-based approach to doing good. The science of living a good life.
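To make that framing concrete, here is a minimal sketch of what the optimization looks like when written out: rank interventions by expected QALYs gained per dollar and fund the most cost-effective one. The intervention names and all the figures below are made-up placeholders for illustration, not real cost-effectiveness estimates.

```python
# Illustrative sketch of the EA framing: rank candidate interventions by
# QALYs gained per dollar spent. All numbers are hypothetical placeholders.

interventions = {
    "insecticide-treated bed nets": {"cost_usd": 1_000_000, "qalys_gained": 25_000},
    "local first-world charity":    {"cost_usd": 1_000_000, "qalys_gained": 500},
}

def qalys_per_dollar(entry):
    """Cost-effectiveness: expected QALYs gained per dollar spent."""
    return entry["qalys_gained"] / entry["cost_usd"]

# Sort interventions from most to least cost-effective.
ranked = sorted(interventions.items(), key=lambda kv: qalys_per_dollar(kv[1]), reverse=True)
for name, entry in ranked:
    print(f"{name}: {qalys_per_dollar(entry):.4f} QALYs per dollar")
```

The whole EA pitch, in this framing, is simply to put your marginal dollar (or hour) wherever that ratio is highest.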
Things start to get concerning when we introduce the ideology of longtermism: the belief that the number of humans alive today is minuscule compared to all the future humans who might live if we did everything right. Longtermism says that we have a moral responsibility to protect and influence the long-term future. In recent years, EA ideology and longtermism have become somewhat synonymous, with many EAs pivoting from helping developing nations to working on existential risks.
Since this pivot, there have been major changes in Effective Altruism discourse, like this move away from climate change because it doesn’t threaten civilisation. Many EAs now focus on issues like pandemic prevention, biosecurity, and AI alignment due to the potential for civilisation-ending (x-risk) events.
This feels wrong. Not that AI alignment isn’t important, but bearing the responsibility for all of future humanity can lead to reprehensible moral outcomes. Hal Triedman, in Ineffective Altruism, cites this interesting thought experiment:
a situation where the head of the CIA explains to the US president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders.
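To see how that arithmetic might go, here is a rough back-of-the-envelope sketch of the expected-value comparison the president is imagined to make. The figure for potential future lives is a placeholder (longtermist estimates span many orders of magnitude), and the population figure is approximate.

```python
# Back-of-the-envelope version of "the arithmetic" in the thought experiment.
# The potential-future-lives figure is a placeholder, not a sourced estimate.

p_doomsday             = 1e-6    # one-in-a-million chance the lunatic succeeds
potential_future_lives = 1e16    # illustrative placeholder for all future humans
population_germany     = 8.3e7   # roughly 83 million people

expected_future_lives_lost = p_doomsday * potential_future_lives  # 1e10
cost_of_strike             = population_germany                   # 8.3e7

# Under a naive expected-value comparison, the strike "wins" by a wide margin,
# which is exactly the reprehensible conclusion the thought experiment warns about.
print(expected_future_lives_lost > cost_of_strike)   # True
print(expected_future_lives_lost / cost_of_strike)   # roughly 120x
```

The point is not the specific numbers, but that once the future population is allowed to be astronomically large, almost any present-day atrocity can be made to come out ahead in expectation.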
I think the moral problem manifests when the tools of rationality are applied to the beliefs of longtermism. Taken alone, the EA approach to rational goodness and the optimistic belief in longtermism are each appealing to me. Put together, the idea that we should disregard impacts on the next 100 years because we are but infinitesimal, inconsequential specks in human history seems ridiculous. In some sense, EA methods applied to longtermism conclude that our primary moral responsibility is to not mess anything up.
Referring to Hal Triedman again,
Ineffective altruism eschews metrics, because “What does doing good look like?” should be a continuously-posed question rather than an optimization problem.
But this rebuttal doesn’t satisfy me; it falls right back into the trap of scope neglect, the very cognitive bias that rationality was supposed to get us out of. And yet, in the words of the philosopher Karl Popper, as quoted by Hal Triedman himself:
We must not argue that a certain social situation is a mere means to an end on the grounds that it is merely a transient historical situation. For all situations are transient. Similarly we must not argue that the misery of one generation may be considered as a mere means to the end of securing the lasting happiness of some later generation or generations; and this argument is improved neither by a high degree of promised happiness nor by a large number of generations profiting by it. All generations are transient. All have an equal right to be considered, but our immediate duties are undoubtedly to the present generation and to the next.
If rational longtermism leads to outcomes that I find morally incorrect, there are only three possible explanations:
1. The rational inference procedure is wrong (or ill-suited to moral questions).
2. The prior assumptions of longtermism are wrong.
3. The outcome is correct, and I need to overcome my scope neglect and bite the bullet.
I don’t know where I land on (1) and (2), but I’m not confident enough to accept (3) just yet.
~ Shan