The collapse of the FTX cryptocurrency exchange in the fall of 2022 put the influence of a previously obscure philosophical movement in the spotlight. The company’s CEO, Sam Bankman-Fried, was a leading advocate for and financial sponsor of the “effective altruist” movement.
Bankman-Fried learned this philosophy directly from a friend, the Oxford philosopher William MacAskill, one of the theory’s most prominent advocates. Years ago, MacAskill advised him that to be a truly “effective” altruist he should become not an activist but a successful businessman who could earn billions to give away for his chosen causes.1
Whatever the role of Bankman-Fried’s philosophy in FTX’s collapse, effective altruism is increasingly influential and warrants scrutiny.2 In essence, effective altruism stresses the importance of using a rational, scientific approach to calculate the most efficient way to help the most people, especially through charitable giving, as opposed to giving your money away for sentimental reasons. It attracts an audience of highly intelligent, scientifically oriented, secular people.
But the more we learn about the causes the effective altruist leadership has embraced, the more we should question the moral foundation of the movement. As we’ll see, this is something that even the movement’s fiercest critics have been unwilling to do.
Longer term than you imagine
Bankman-Fried prioritized giving to causes favored by “longtermism,” a school of thought within effective altruism that orients our decisions toward potential generations in the distant future and urges us to work overtime for their sake.
Just months before the scandal, MacAskill released a major new book, What We Owe the Future, arguing for the longtermist thesis. For longtermists, concern for future generations doesn’t simply mean providing for your children or mentoring the young. It doesn’t even mean concern about what many envision as the catastrophic impact of climate change decades down the road. Longtermists like MacAskill tend to downplay worries about climate change as too short term. Concern for the real long term means thinking centuries or even millennia into a future that no one we love will ever know.
Some of the long-term risks and opportunities that motivate longtermists include encouraging the development of institutions with proper moral and intellectual values, preventing global pandemics, and avoiding nuclear war. But what really animates longtermists’ concern about entrenched values is the worry that the wrong ones could be locked into the programming of artificial intelligence. As MacAskill explains in his book, they are concerned more generally that AI could someday turn on us, manipulate our values, and even try to exterminate us. Likewise, the pandemics they want us to prepare for are not diseases like Covid that may kill millions, but the much more improbable pandemics that could bring about the extinction of the human race.3
These apocalyptic concerns are accompanied by other perplexing priorities. Because the longtermists (like most effective altruists) adopt the utilitarian philosophy that the “greater good” consists in maximizing the sum total of human happiness, they advocate that more of us should have more children. They urge this not because children bring happiness to the lives of their parents, but ultimately because those children will live to affect the future for the better, and because an increase in the population adds even more to that sum total of happiness.4 (Notably, MacAskill thinks this is true even if we cannot provide well for the added population and their “average” amount of happiness decreases.) Longtermists even emphasize the moral imperative to colonize outer space, as this is seen as necessary for the expansion of human consciousness across the universe.5 Some longtermists seriously urge that we create the preconditions for the evolution of new post-human digital beings whose consciousness will be capable of an unimaginable degree of happiness.6
So, either AI will kill us, or it will become like a god, and we, the humble servants of the greatest happiness, must work to ensure the coming of this higher being. Either way, we can’t be so preoccupied with luxuriating in the present that we take our eyes off the hell on earth or the heaven in the stars that awaits our great-great-great-great-great-great-great-(etc.)-grandchildren. Mostly, longtermists focus on avoiding the coming hell.
And our descendants are many. Central to longtermist calculations is their claim that the total number of future human or digital beings is astronomically large (MacAskill mentions a figure of 80 trillion future people; Nick Bostrom talks about 10⁵⁸ of them).7 This means the total amount of happiness or suffering at stake in choosing the right or wrong priorities is correspondingly massive. So even supposing the odds of a robot apocalypse to be tiny, multiplying that tiny probability by an astronomically large number of lives still yields a very large “expected value.” The future, say the longtermists, is big: hence the weight of our moral obligation toward all those future people is heavy.
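To make the arithmetic concrete, here is a minimal sketch of the kind of expected-value calculation the longtermists appeal to. The one-in-a-million probability is purely illustrative, not a figure any of these authors endorses; the population figures are the ones cited above.

```python
# A minimal sketch of the expected-value arithmetic described above.
# The catastrophe probability is purely illustrative; the population
# figures are the ones cited in the text (MacAskill, Bostrom).

p_catastrophe = 1e-6        # illustrative one-in-a-million chance of an extinction-level event
future_people_low = 80e12   # MacAskill's figure: 80 trillion future people
future_people_high = 1e58   # Bostrom's figure: 10^58 future (possibly digital) beings

# "Expected" lives lost = probability of the catastrophe x lives it would prevent from existing
print(f"{p_catastrophe * future_people_low:.0e}")   # 8e+07: 80 million lives in expectation
print(f"{p_catastrophe * future_people_high:.0e}")  # 1e+52: 10^52 lives in expectation
```

On Bostrom’s figure, even this minuscule probability corresponds to vastly more expected lives than have ever actually been lived, which is how the arithmetic comes to swamp every present-day concern.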
The ideas at the root of longtermism’s absurdities
Critics of longtermist altruism claim that its obsession with sci-fi hypotheticals is inexcusable when there are real people suffering and dying in our world today.8 To explain longtermism’s detachment, its conventional critics point to everything from the idiosyncratic interests of tech nerds to the bias induced by donors eager to excuse their wealth and power.9 Still others claim the problem isn’t amoralism, but excessive idealism: longtermists are foolishly “hubristic” to think that any vision of the good can be achieved.10
None of these critics seriously consider the intellectual content of the ideal of altruism itself.
We can begin to see why this ideal might be responsible for longtermism’s call for all of us to sacrifice to the distant future by looking to MacAskill’s book.
“Morality, in central part, is about putting ourselves in others’ shoes and treating their interests as we do our own.”11 That’s MacAskill’s foundational assumption about the nature of morality that few of longtermism’s critics would challenge. But if it’s true, where shall we find all the other shoes to put ourselves in? Remember, “the future is big.” If altruism is to be understood in the popular utilitarian way, by looking to consequences of one’s actions for the largest number of people, why wouldn’t one look to the very long-term future where most of the other people will be?
What about the idea that it’s heartless to ignore the others in our very midst? MacAskill handles this question by invoking ideas from the father of effective altruism, Peter Singer. Singer argues that if morality is really about treating others’ interests as we do our own, we should not be any less concerned for strangers halfway around the world than we are for a child drowning in front of us. MacAskill adds: if distance in space from the self is not morally relevant, why should distance in time be any different? The moral concerns of “neartermists” seem too sentimentally connected to the selfish present.12
Another inspiration for the longtermist moral concern with future generations is philosopher Derek Parfit. If Singer is the father of effective altruism, Parfit is the grandfather (he was also MacAskill’s mentor at Oxford).13 It is Parfit who first argued that a utilitarian concerned with the impersonal consequences of our actions should regard distance in time as no more morally significant than distance in space.14 And it is Parfit who famously argued that utilitarian premises would justify increasing the total amount of happiness even if it means making each individual less happy (his infamous “repugnant conclusion”).15 Parfit’s reasoning is then clearly at work in the longtermists’ baby-making project. In his view, a morality concerned with maximizing the total amount of happiness doesn’t only aim to make people happy, it aims to make happy people.16
Why should we care about maximizing across all time the total number of happy people, anyway? Parfit’s thought supplies an answer that reveals the core assumptions of altruism, “effective” or otherwise. He famously challenged the idea that the individual has a stable personal identity over the course of his life. On this view, prudent planning and working for our futures are really just sacrificing for the sake of our “future self,” a totally different person. If we have any reason to work for “our” future happiness, it is not based on the interest of an enduring self. But then our reason to sacrifice for other people is not substantially different from our reason to serve our future self. All that’s left to matter is maximizing the impersonal total quantity of happiness.
Although they’re criticized for reaching absurd conclusions, longtermists like MacAskill and Bankman-Fried are just following the logical path laid down by eminent moral philosophers. To simplify that path: morality is really about caring more about others than about yourself, since there are so many more others than you who need your equal attention. And the best way to be concerned with the most others who are far away from your rather insignificant, fragmented self is to concern yourself with the vast future beyond your interests.
If longtermism is absurd, but the longtermists are just following the logic of the altruist moral ideal, we should take seriously that this ideal is the real source of longtermist absurdities.
The fundamentals derive from faith
Effective altruism has long portrayed itself as a rational alternative to an irrational, emotionalist approach to charity. Yet its critics have noticed that its doctrine (and the sociology of those who practice it) has the trappings of a religion. Working overtime to make and sacrifice one’s earnings, while also churning out babies, all for the sake of appeasing and facilitating the creation of higher digital beings in a far-off future, sure does sound like joining a religious cult.17
Why would altruists so obsessed with calculation of probabilities, who follow the logic of altruism to its ultimate implications, behave like members of a cult? The answer can be found in the source of the premises from which the longtermists have deduced their conclusions.
In the concluding chapter of Reasons and Persons, Parfit claims religious ethics has “prevented the free development of moral reasoning.”18 Yet, in the final appendix of the book, he tells the reader that the foundational assumption of his ethics, the non-existence of an enduring self, can be found in the works of the Buddha: “The mental and the material are really here, But here there is no human being to be found. For it is void and merely fashioned like a doll, Just suffering piled up like grass and sticks.”19
Parfit’s willingness to point to religious texts in support of the foundations of his philosophy should call into question the sincerity with which he claims to be working to build a non-religious ethics. The same goes for many of the philosophers who follow in his tradition.
At first, Singer’s methodology might seem more rational than other approaches to ethics. He, along with other effective altruists, makes much of the fact that too many people rely on sentimentality when deciding on a charitable cause. He even claims there are evolutionary reasons to discount the reliability of the moral “intuitions” that make us care more about the drowning child in front of us than about the starving child abroad.20
But this doesn’t stop Singer from basing his argument on our “intuitive” response to the drowning child, claiming that if it suggests we should help the child, we should also help multitudes of others. And while he is willing to challenge the cognitive provenance of our “intuitive” responses to concrete cases, he suggests that our “intuitive” response to highly abstract principles reveals “propositions of real clearness and certainty.” He claims that the proposition “each one is morally bound to regard the good of any other individual as much as his own” is an intuitively obvious axiom (citing the 19th-century British utilitarian Henry Sidgwick, and Parfit).21
But as Singer must know, such an axiom is far from clear and certain to everyone. It would not have been clear to those who founded the subject of philosophical ethics, Socrates and Plato. They treated its subject as the virtues necessary to achieve happiness or eudaemonia of the unfragmented human soul. It would not have been clear to Aristotle, who said that while the end of one’s flourishing could include the good of one’s parents, children, spouse and friends, “we must impose some limit; for if we extend the good to parents’ parents and children’s children and to friends of friends, we shall go on without limit.”22
If Sidgwick, Parfit, and Singer find their “intuitive” propositions obvious while the classical founders of their field did not, and it’s not because of some new observation they have made, what accounts for the change in view?23
The most obvious source is the major historical development, subsequent to ancient Greek philosophy, that influenced countless institutions, authority figures, and the parents who raised young philosophers. The development in question was the rise of Christianity.
Christianity stressed the importance of submitting to the demands of a power greater than oneself. It had an enormous influence in moving the West’s conception of morality from a concern for the excellence of one’s character to an obsession with being “bound to regard the good of any other individual as much as his own.” It should not be surprising that in his defense of the effective altruist doctrine, even the secular Singer quotes St. Thomas Aquinas quoting St. Ambrose: “The bread which you withhold belongs to the hungry; the clothing you shut away, to the naked; and the money you bury in the earth is the redemption and freedom of the penniless.”24
Effective altruists who pride themselves in their scientific approach seem oblivious to the possibility that their “intuitions” do not yield access to some “hidden” truth, but are nothing more than emotional reactions conditioned by ideas they’ve accepted uncritically from a social milieu pervaded by two millennia of Christianity.
There’s every reason to think that the irrationality of effective altruism is due, not to the amoralism of its advocates or the foolishness of their idealism, but to the irrationality of their ideal, the ideal of altruism itself.
In the wake of the FTX scandal, William MacAskill himself weighed in on Twitter to condemn Bankman-Fried, arguing that serious effective altruists do not use naked “end justifies the means” reasoning, and instead value integrity and honesty.25 Yet, as one philosopher responding to MacAskill noted, utilitarians who see some utility in “common sense morality” can be reliably counted on to say we can suspend it when there are big enough stakes.26
Any philosophy is subject to cynical use and manipulation by insincere advocates. But a philosophy that gives a pass to anyone who is not acting for his own sake but only for the sake of others is uniquely subject to abuse. A philosophy that says give up what you love and work for others is a tool in the hands of anyone who is willing to speak on behalf of others to gain power over those willing to give up what they love. It’s an especially dangerous tool when those who gain the power convince themselves that it is noble to use it, not for their own sake, but for the sake of a nameless, faceless future.
How, indeed, would one apply such a philosophy without abusing the lives of the people who practice it? Ayn Rand put her finger on the essence of the altruist morality long before advocates of altruism began to make their meaning as explicit as the effective altruists now have. In her novel Atlas Shrugged, she puts these words in the mouth of one character whose workplace has adopted a similar philosophy: “Do you care to imagine what it would be like, if you had to live and to work, when you’re tied to all the disasters and all the malingering of the globe? . . . To work — on a blank check held by every creature born, by men whom you’ll never see, whose needs you’ll never know, whose ability or laziness or sloppiness or fraud you have no way to learn and no right to question — just to work and work and work — and leave it up to the [altruists] of the world to decide whose stomach will consume the effort, the dreams and the days of your life. And this is the moral law to accept? This — a moral ideal?”27
Endnotes
- Sigal Samuel, “Effective Altruism Gave Rise to Sam Bankman-Fried. Now it’s Facing a Moral Reckoning,” Vox.com, November 16, 2022.
- There’s at least some good reason to think Bankman-Fried was practicing what he preached. It seems he was willing to take enormous risks because he thought the stakes were so high, and the stakes for him were the causes he sought to support with his giving. See Sarah Constantin, “Why Infinite Coin-Flipping Is Bad,” SarahConstantin.Substack.com, December 1, 2022. And Bankman-Fried distributed over $100 million to various effective altruist-endorsed charities. See John Hyatt, “Donations to Effective Altruism Nonprofits Tied to an Oxford Professor Are at Risk of Being Clawed Back,” Forbes, November 16, 2022.
- See the “Areas of Interest” listed on the old FTX Future Fund web site, archived here: https://web.archive.org/web/20221114011635/https://ftxfuturefund.org/area-of-interest/artificial-intelligence/
- William MacAskill, What We Owe the Future (New York: Basic Books, 2022), 187–89, 234.
- MacAskill, What We Owe the Future, 189.
- Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (New York: Oxford University Press, 2014), 123.
- MacAskill, What We Owe the Future, 13; Bostrom, Superintelligence, 123.
- As one puts it: “Picture effective altruists sitting around in a San Francisco skyscraper calculating how to relieve suffering halfway around the world while the city decays beneath them” (Ross Douthat, “The Case for a Less-Effective Altruism,” New York Times, November 18, 2022).
- See Annie Lowrey, “Effective Altruism Committed the Sin It Was Supposed to Correct,” Atlantic, November 17, 2022; Joe Pitts, “Not So Effective Altruism,” National Review, November 25, 2022; Olúfẹ́mi O. Táíwò and Joshua Stein, “Is the Effective Altruism Movement in Trouble?,” Guardian, November 16, 2022; Ian Birrell, “Sam Bankman-Fried’s Elitist Altruism,” Unherd.com, November 23, 2022. The charges of cynicism were all the more plausible because (at least out of context) Bankman-Fried’s words to an acquaintance do not speak well of his sincerity: he called his emphasis on ethics “mostly a front” and said “I feel bad for those guys who get f—ed by it, by this dumb game we woke westerners play where we say all the right shibboleths and so everyone likes us” (Kelsey Piper, “Sam Bankman-Fried Tries to Explain Himself,” Vox.com, November 16, 2022).
- Christine Emba, “Why ‘Longtermism’ Isn’t Ethically Sound,” Washington Post, September 4, 2022.
- MacAskill, What We Owe the Future, 4.
- In fairness, Singer himself has been critical of the longtermist program his own ideas seem to have inspired: “Viewing current problems – other than our species’ extinction – through the lens of ‘longtermism’ and ‘existential risk’ can shrink those problems to almost nothing, while providing a rationale for doing almost anything to increase our odds of surviving long enough to spread beyond Earth. Marx’s vision of communism as the goal of all human history provided Lenin and Stalin with a justification for their crimes, and the goal of a ‘Thousand-Year Reich’ was, in the eyes of the Nazis, sufficient reason for exterminating or enslaving those deemed racially inferior” (Peter Singer, “The Hinge of History,” Project Syndicate, October 8, 2021). It’s a fair point, but it’s not clear why Singer doesn’t see that it would apply as easily to his own all-encompassing version of ends-justify-the-means utilitarianism.
- MacAskill, What We Owe the Future, 167.
- Derek Parfit, Reasons and Persons (New York: Oxford University Press, 1984), 357.
- Parfit, Reasons and Persons, 381–90.
- Parfit, Reasons and Persons, 361–64.
- See Émile P. Torres, “The Dangerous Ideas of ‘Longtermism’ and ‘Existential Risk’,” Current Affairs, July 2021. Given the way some of their contemporary advocates behave like prophets and advocate tithing or the purchase of the equivalent of indulgences, this criticism is not far-fetched. See Gideon Lewis-Kraus, “The Reluctant Prophet of Effective Altruism,” New Yorker, August 8, 2022.
- Parfit, Reasons and Persons, 454.
- Parfit, Reasons and Persons, 503. Parfit would argue there’s more in support of his view of the self than this. Much of his book features a series of sci-fi/fantasy “thought experiments” meant to challenge the idea that we have an enduring self. But these thought experiments rely on no more than the “intuitions” he goes on to appeal to in defense of his ethical axioms.
- Peter Singer, preface to Famine, Affluence and Morality (New York: Oxford University Press, 2016), xxv–xxix.
- Singer, “Afterword to the 2011 edition,” The Expanding Circle: Ethics, Evolution, and Moral Progress (Princeton, NJ: Princeton University Press, 2011), 200–201.
- Aristotle, Nicomachean Ethics, Terence Irwin translation (Indianapolis, IN: Hackett Publishing, 1985), 15 (1097b10–15).
- It’s hard to answer the question on their behalf, when they claim, as Parfit does, that intuition can reveal moral truths that are not truths about any obvious facts in reality (so we can’t offer an account of how these facts could have been hidden for so long). Parfit, On What Matters, Vol. II (New York: Oxford University Press, 2011), 496–7.
- Singer, Famine, Affluence and Morality, 23.
- See MacAskill’s Twitter comments here: https://twitter.com/willmacaskill/status/1591218014707671040. It’s a point he himself makes in his book, just as utilitarian philosophers who’ve sought to answer common objections to their philosophy have for over a century, by arguing that following “common sense morality” is what actually helps maximize total happiness. (MacAskill, What We Owe the Future, 240–2.)
- Justin Weinberg, “FTX, Moral Philosophy, Public Philosophy,” DailyNous.com, November 18, 2022. One longtermist philosopher (Nick Bostrom), commenting on a series of natural disasters, wars, and pandemics, puts it as follows: “tragic as such events are to the people immediately affected, in the big picture of things – from the perspective of humankind as a whole – even the worst of these catastrophes are mere ripples on the surface of the great sea of life” (Nick Bostrom, “Existential Risks,” NickBostrom.com [accessed January 16, 2023]). If World War II was a mere ripple, then some tech nerd’s loss of savings from a crypto collapse would be but a faint tremor on the surface of the ripple.
- Ayn Rand, Atlas Shrugged (New York: Signet, 1957), 615–16.