Effective Altruism: The Road to Crazy Town


In the coverage of the collapse of cryptocurrency exchange FTX, the press has made much of the devotion of its former CEO, Sam Bankman-Fried, to “effective altruism” (EA), a community that, at least on paper, tries to maximize the impact of its members’ philanthropic works. But not nearly enough has been written about what the effective altruism movement is actually doing, and about the dubious-at-best logic that steers it toward conclusions that are both seductive and convenient for its members.

On its official web site, “effective altruism” is defined as follows:

“Effective altruism is a project that aims to find the best ways to help others, and put them into practice.

It’s both a research field, which aims to identify the world’s most pressing problems, and the best solutions to them, and a practical community that aims to use those findings to do good.”

At a high level the goal is inarguably a worthy one: look at areas such as hunger and disease, tackle the biggest problems, and direct resources to the most promising efforts to fix them. The effective altruists are not the first to think about how to improve the effectiveness of the nongovernmental organization (NGO) sector; the noted business strategy guru Michael Porter followed up his seminal “What is Strategy?” paper with substantial work on applying its principles to nonprofits, and other business consultants have followed suit over recent decades.

In truth, there are many layers to such efforts. One can start by simply looking at how to run a nonprofit better, to ensure that there is proper focus on outcomes and that money is being raised and used well. But every business exec joins their first non-profit board thinking that it should be straightforward to apply most or all of the principles of running a for-profit business to a non-profit, and many later leave frustrated and disappointed after discovering that non-profits have some unique issues for which the business community offers few solutions.

Effective Altruism certainly plays in that space, often trying to build metrics around nonprofit outcomes in order to allow for apples-to-apples comparisons between organizations that would simplify (or at least quantify) the question of which one is most worthy of support.

But EA proponents have a more ambitious agenda: they want to do the same kind of analysis on the causes themselves. Is it better to invest in fighting malaria, or HIV – or is it better to invest in fighting hunger instead? Should one support efforts to combat illicit drug use in the United States, or COVID vaccination efforts in South Asia, where the same amount of money goes much further?

Trying to come up with a framework for comparing different options for “doing good” is a complicated business that quickly descends into philosophy, as it inevitably arrives at a discussion of the value of a human life. At perhaps the simpler end of the spectrum, in some cases it’s possible to just add up the number of lives that would be saved. Though even then the math gets harder when probability comes into play: promising new programs that still need to be developed and deployed deal in lives “probably saved” rather than in hard counts.

But at the other end of the spectrum things get very challenging when we discuss improving the quality of lives rather than saving lives; how many people can be lifted out of poverty, or provided more nutritious food or a better education?

The EA community doesn’t shy away from any of this; true to the “quant” backgrounds of many of the newly-wealthy tech bros in the community, they have adopted a concept from probability theory called “expected value” in order to convert their assessments into hard numbers. “Expected value” is essentially a weighted average of the potential outcomes of an event: it averages all the possible outcomes, weighted by their individual probabilities.

For example, if you’re betting $1 on a double-or-nothing coin flip, then 50% of the time you’ll make $1 and 50% of the time you’ll lose your dollar; the expected value of the coin flip is the average of all the outcomes, or $0. Expected value, as a concept, is useful for understanding the long-term outcome of an event that happens repeatedly: if you bet on a coin flip once, the result will never be the expected value of $0, but if you do it 100 times in a row you will likely come very close to breaking even.
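For readers who want the arithmetic spelled out, here is a minimal sketch in Python (the function name and structure are my own, not anything the EA community actually uses):

```python
import random

def expected_value(outcomes):
    """Weighted average of all possible payoffs, by their probabilities."""
    return sum(probability * payoff for probability, payoff in outcomes)

# Double-or-nothing coin flip on a $1 bet: win $1 half the time, lose $1 otherwise.
coin_flip = [(0.5, 1.0), (0.5, -1.0)]
print(expected_value(coin_flip))  # 0.0

# A single flip never lands on $0, but 100 flips usually come close to breaking even.
total = sum(random.choice([1.0, -1.0]) for _ in range(100))
print(total)
```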

You can also apply the concept of expected value to buying a $1 lottery ticket. Suppose you have a one-in-a-million chance of winning $10 million, but otherwise your ticket is worthless. The expected value of a $1 ticket is $10 – that sounds great! Except that you’re going to buy an awful lot of lottery tickets before you are likely to hit that jackpot – and even then you’re not guaranteed to do so. This points to the big caveat of using expected value: in the short term, it’s almost always wrong. Its accuracy depends on events that repeat often; buying a single lottery ticket is almost certainly a losing proposition.
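The same sketch applied to the lottery example (again illustrative Python; the short simulation just shows how rarely a single ticket pays off):

```python
import random

JACKPOT = 10_000_000.0
WIN_PROBABILITY = 1 / 1_000_000

# The $10 "expected value" of a $1 ticket quoted above: the jackpot weighted by its odds.
print(WIN_PROBABILITY * JACKPOT)  # 10.0

# But in the short term, expected value is almost always wrong: simulate buying
# 1,000 tickets and count the wins (you'd expect roughly one win per million tickets).
wins = sum(1 for _ in range(1_000) if random.random() < WIN_PROBABILITY)
print(wins)  # almost certainly 0 -- every one of those tickets was a losing proposition
```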

Investors use expected value as one tool – but not the only one – to evaluate the potential outcome of a new business venture or an acquisition. For example, a new product might have a 20% chance of making $5 million, a 30% chance of breaking even, and a 50% chance of losing the initial investment. I can’t emphasize enough that expected value is never the only valuation tool that sophisticated investors use, and for most it’s not even the primary one. But it can offer some insights.
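Here is that product bet as a sketch; note that the $1 million downside is my own assumed figure, since the example doesn’t put a dollar amount on the initial investment:

```python
# Hypothetical new-product bet; the $1 million downside is an assumed figure.
INITIAL_INVESTMENT = 1_000_000

outcomes = [
    (0.20, 5_000_000),            # 20% chance: makes $5 million
    (0.30, 0),                    # 30% chance: breaks even
    (0.50, -INITIAL_INVESTMENT),  # 50% chance: loses the initial investment
]
ev = sum(p * payoff for p, payoff in outcomes)
print(f"${ev:,.0f}")  # $500,000 under these assumed numbers
```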

The EA community, however, seems to have adopted expected value as a primary tool in their framework for evaluating both causes and individual organizations: the expected value of the number of lives saved, or persons vaccinated, or meals delivered. At that level alone it’s problematic because it’s essentially using the concept to place a value on a single event – not one that repeats frequently. But the problems with what they are doing run much deeper, because the community also wants to calculate expected values for lives improved, based not only on how many people have their life improved, but also on the size of the improvement.

And this is where the philosophy gets entangled. The basis of their model is an arm of classical philosophy called “consequentialism,” which holds that the morality of a decision should be judged by its outcomes (rather than its intent). A better-known offshoot of consequentialism is “utilitarianism,” the idea that moral decisions should be taken to maximize the utility of the result, where “utility” is a measurement of the amount of good done, minus the amount of harm caused.

In the simplest case, where there are no harms, utilitarianism is a seductive approach: choose whichever option yields the most good. In practice it’s more complicated, because the “good” may, once again, defy direct comparison: how much better is someone’s life when they get better healthcare, versus better education? But adding in (or more accurately, subtracting out) the harms done quickly brings to light the underlying weakness of utilitarianism, especially when one is adjudicating by “expected value.” We are forced to ask ourselves whether it’s acceptable to make one person’s life worse in order to make someone else’s life better.

Some of the cases of this are simple and easily dismissed: for most people it’s a no-brainer to tax a wealthy person a small amount in order to pay for food and shelter for a poor person; the rich person’s reduction in quality of life is barely noticeable, whereas the poor person’s life is greatly improved. But where do you draw the lines?

The Chinese government relocated entire towns in order to build out water reservoirs behind new, massive dams on the nation’s major rivers. Here in the U.S., in the Tuskegee syphilis study, doctors intentionally withheld treatment from Black men infected with the disease (without their knowledge or consent) in order to learn more about its progression, in the hopes of saving many more lives later. Is it acceptable to cause great harm to a small number of people in the pursuit of the probability of improving the lives of others – either a great improvement to a small number of people, or a marginal improvement to millions?

If you’re just going by total “utility,” you inevitably arrive at philosopher Derek Parfit’s “Repugnant Conclusion”: that for any large population with a high quality of life, you can conceive of a much larger population with a lower quality of life but a higher total utility.
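A toy calculation makes the trap concrete (the populations and quality scores below are illustrative numbers, not anyone’s real estimates):

```python
# Total "utility" computed as population size times average quality of life.
population_a, quality_a = 5_000_000_000, 10    # large population, good lives
population_b, quality_b = 100_000_000_000, 1   # far larger population, lives barely worth living

print(population_a * quality_a)  # 50,000,000,000 "utils"
print(population_b * quality_b)  # 100,000,000,000 "utils": the worse world "wins"
```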

It gets worse. A more recent trend within the EA community is “longtermism,” the notion that future lives also have (measurable) value. There will be billions of people born between now and the end of the twenty-first century; how do they fit into our calculations of the “expected value” of a particular act? It’s a question posed by William MacAskill in his book “What We Owe the Future”; both MacAskill and his book are celebrated among longtermists.

But this line of thinking distorts the (mis)use of “expected value” – and utilitarianism in general – to absurdity. The sustainable population of the Earth may max out at around fifteen billion people, but space travel could potentially allow for hundreds of billions of humans on other planets: once you accept that premise, the “expected value” of any investment in space travel, no matter how large (or no matter how much harm it causes), is positive – and likely enormous.
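To see how the numbers run away, here is a sketch with made-up figures: even a vanishingly small probability, multiplied by a vast enough future population, swamps any concrete good available today.

```python
# All figures here are made up for illustration, not real longtermist estimates.
p_success = 1e-9          # a one-in-a-billion chance the investment matters at all
future_lives = 100e9      # "hundreds of billions of humans on other planets"

expected_lives = p_success * future_lives
print(expected_lives)  # 100.0 "expected" lives

# By this logic, the speculative bet beats a certain, concrete good today.
certain_lives_saved_today = 50
print(expected_lives > certain_lives_saved_today)  # True
```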

One author describes this as the inevitable “train to crazy town” for EA and especially for longtermism: once you allow for future lives, the potential positive utility, even at very low probability, becomes so large that any short-term action can be justified. That’s pretty convenient when you’re a multi-billionaire who thinks that space rockets are cool.

It’s worth noting, though, that the “longtermism” framework isn’t entirely absurd, because it forces us to consider extinction events. If a global pandemic has the potential to wipe out humanity in its entirety, we should be considering investments that would help us avoid (or respond to) one. Similarly, we could look at investing in responses to an asteroid on a crash course with Earth.

That said, expected value still isn’t the best way to assess such investments: even a low-probability extinction event, multiplied by eight billion lives, takes on outsized importance. It follows the same logic as the lottery ticket: a one-in-a-million chance that it saves humanity, but otherwise it’s wasted resources.
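Run through the same sketch with the figures from the paragraph above:

```python
# A one-in-a-million chance of preventing human extinction, valued in lives.
p_prevents_extinction = 1 / 1_000_000
lives_at_stake = 8_000_000_000  # all of humanity

print(p_prevents_extinction * lives_at_stake)  # 8000.0 "expected" lives saved
```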

But we’re not done exploring the depths of the tortured logic that the EA community has adopted, because “expected value” has also pervaded their personal philosophy. Many in the community have adopted the idea – known within EA as “earning to give” – that they should earn as much money as they can, with the commitment to give the vast majority of their wealth away through philanthropy. Unfortunately, that philosophy doesn’t place any limits on how their personal fortunes should be obtained, other than – you guessed it – the expected value of the resulting utility. They can use the same math to personally justify Ponzi schemes and other illegal or immoral approaches to getting rich, so long as they can convince themselves that they will use the money to generate more utility than would have been generated by the people they are ripping off. Within this dubious moral framework we can recognize the personal motivations and actions of Sam Bankman-Fried.

Finally, we can place the capstone on this journey by looking at which causes and organizations the EA community has decided are most promising. GiveWell, an EA-aligned nonprofit recommender site, directs donors toward organizations fighting malaria, addressing vitamin A deficiency, and creating incentives for childhood vaccination. But the EA web site also points out a tendency for the EA community to fund itself, including research into “AI alignment” and the “Global Priorities Institute.”

It also points to a project to conduct “human-challenge trials,” in which a group of volunteers is deliberately infected with a disease in order to speed vaccine development – a textbook example of harming a few (though hopefully with their consent) in order to provide quality-of-life improvements to a much larger group of people.

This is the world that the Effective Altruism folks have created for themselves: an echo chamber in which they have created the moral justification to get as rich as they possibly can, and a framework that can easily be misused to create post-hoc justifications to spend it on essentially whatever they want in the name of utilitarian and “longtermist” goals. Author and pundit Cory Doctorow calls this “empiricism washing”: creating a veneer of credibility for your actions by associating them with a framework that produces numerical assessments, no matter how dubious they are.

So what can be done about this? Frankly, not much. As the Golden Rule says, “he who has the gold makes the rules.” As of this writing, Bankman-Fried, who was the darling of the EA community until FTX imploded, is being labeled as an embezzler, but few are connecting the dots to the larger problems with the philosophy that led him to this place. He may have fallen from grace, but there are plenty of other rich people – those who inherited wealth, those who made a fortune through legitimate means, and the grifters alike – who are still very much bought into the cult of EA, longtermism, and the belief that a strictly empirical approach to “doing good” is the best way to live their lives.

Kevin Schofield – http://sccinsight.com
Kevin is a city hall reporter and the founder of SCC Insight, a web site focused on providing independent news and analysis of the Seattle City Council and Seattle City Hall in general. In a previous life, he worked for 26 years in the tech industry in a variety of positions but most notably as the COO of the research division at Microsoft. Kevin volunteers at the Woodland Park Zoo, where he is also on the Board of Directors. He is also the Vice Chair of the Board of Trustees of Harvey Mudd College.

6 COMMENTS

  1. Great explainer, Kevin. Hard-core EA-ers seem to live in a virtual world constructed entirely of edge case thought experiments. A handy way to justify anything.

  2. Enlightening article, Kevin. Too bad some of the wealthy have taken this tortuous route. Better to look at how MacKenzie Scott does her giving. Check out her website, Yield Giving. It lists the organizations to which she has given $14 billion (think of how many zeros that is) with no strings attached. It is impressive to read. A couple of examples: Habitat Clallam County, LEAP, Madres and the Steve Fund.

  3. Philanthropy used to trust the experts. If United Way has vetted various organizations, then just give to United Way and let them disburse the donations as they see fit. That came to be known as “throwing money over the wall,” and the new breed of philanthropists wanted to leapfrog those validating organizations and pick winners (and ego-extensions) themselves and then apply rigorous tests to see what was working. A second development was to trust the employees as scouts for worthy organizations, and then to match their targeted gifts handsomely (as Microsoft does).

    By now, it’s a jungle of motives and egos and accountabilities. It’s easy to discern hidden motives (such as greenwashing) in many of these donations. Another problem is that this subjective philanthropy is unstable, as the donors get tired of certain areas and are swept up by the pressure from peers to go to the next hot thing.

  4. As a confirmed “Consequentialist”, I would be happy if both my wealthy neighbors and my City government were more focused on the outcomes rather than the intentions of their actions.

  5. Point of information:
    To what extent is EA actually alive as a guide in giving large sums? Any examples? Notable donors? Foundations? Organizations? (Beyond those devoted to spreading EA.)

    My first reaction to EA is to start laughing.
