Leverage Research
Exploring the mind and society

EA Failure Scenarios


[This series of essays is written for insiders in the Effective Altruism movement, a movement devoted to using reason and evidence to determine how to have the greatest positive impact. For an introduction to EA, read these posts: 1, 2, 3. Or, come to EA Global.]


In this series of essays, we will look at EA failure scenarios: scenarios in which the EA movement is destroyed or disrupted, or otherwise substantially fails to achieve its potential. The series is broken into three parts:

  • Part 1: Introduction (this essay)
  • Part 2: Failure Scenarios (coming soon)
  • Part 3: Reason for Hope (coming soon)

In this essay we introduce the topic of EA failure scenarios and explain why failure scenarios are worth researching and addressing. In Part 2, we lay out the failure scenarios, talk about why each is plausible, and describe what might be done to avert them. In Part 3, we conclude on a hopeful note. Armed with knowledge of how it might fail, we believe the EA movement will be much more likely to succeed.

Importance, Tractability, Neglectedness

If you want to do the most good, it is critical to select a project that is important, tractable, and neglected. To be important, your project must aim to have a large impact. To be tractable, the project must have a sufficiently high probability of success. To be neglected, it must be that if you don't do your project, no one else will, or at least that no one else will do it nearly as well, or will not do it for a long time. The goal is not to cause positive effects ourselves, but rather to act so that the world ends up better. If your project is not neglected, then if you don't do it, someone else will, and the world will end up roughly the same. (For variations and elaborations on these criteria, see here and here.)

EA Failure Scenarios: Importance

For EA failure scenarios to be important, two conditions must be fulfilled: (a) EA has to have the potential to cause a large positive impact, and (b) it must be sufficiently likely that without intervention, the EA movement will fail to fulfill this potential.

Both of these conditions seem to be fulfilled.

Does EA have the potential to cause a large positive impact? The EA movement has already done a lot of good. EAs have saved more than a thousand lives[1] and committed more than a hundred million dollars to effective giving[2]. More than this, EAs may even have diminished existential risk by creating global awareness of the dangers of unsafe AGI[3].

Is that the most EA can do? It seems highly unlikely. The number of EAs is growing rapidly[4]. It is easy to imagine the EA movement having 10 or 100 times as many members, doing work that is on average as good as or better than what EAs are doing now. Thus, condition (a) is fulfilled.

Is it sufficiently likely that without intervention, the EA movement will fail to fulfill this potential? In the following essays, we will argue that there are multiple plausible scenarios in which the EA movement does substantially less good than it can. If even one of these failure scenarios is sufficiently plausible, then condition (b) is fulfilled as well. Furthermore, there are general reasons to believe that the EA movement will fail. Most movements fail in the relevant sense. And it seems that EA, or something very much like it, already lived and died once.

EA Failure Scenarios: Tractability

For EA failure scenarios to be tractable, it must be that there are implementable interventions that will substantially reduce the likelihood of failure, and thereby substantially increase the probability that the EA movement will fulfill much more of its potential. In the following essays, after we present each failure scenario we will present specific proposals for how EAs can substantially reduce the likelihood of failing in that way. Considering the sum of these interventions, and other interventions EAs are likely to invent, we believe that EA failure is not inevitable. The success or failure of the EA movement depends on the actions that EAs take.

EA Failure Scenarios: Neglectedness

Finally, it appears that the question of EA failure scenarios has received little attention thus far. While EAs informally discuss EA movement failure fairly frequently, there are essentially no systematic treatments of this topic anywhere. There are no lists of scenarios, debates about which scenarios are the most likely, or sets of proposed interventions. Many actions currently taken by EAs do help the EA movement grow and develop, but without clear failure scenarios in mind, it is hard for the EA community to take concerted action to prevent them.

Thus EA failure scenarios appear to be important, tractable, and neglected. Hence we think they are worth both researching and addressing.

The Scenarios

Thus far we have identified the following EA failure scenarios:

  • Scenario #1: Dilution (essay coming soon)
  • Scenario #2: Subversion (essay coming soon)
  • Scenario #3: Disruption (essay coming soon)
  • Scenario #4: Stagnation (essay coming soon)

In each of the essays that follow, we will present a scenario in which the EA movement fails or is otherwise severely damaged, describe how to tell whether that scenario is approaching, and offer recommendations for averting it. Each scenario is represented by a conjunction of claims, and so it is very improbable that any of the scenarios will transpire exactly as stated. Instead of thinking only about each scenario exactly as described, EAs should think about the set of all sufficiently similar scenarios and assess the probability that some member of that set occurs.

Is our list exhaustive? We doubt it. There are many ways for things to fail and the EA movement is no exception. In addition to paying attention to the failure scenarios we describe, EAs should keep their eyes open for other causes of failure. Through vigilance and concerted effort, we think the EA movement can have 10 times its current impact, or 100 times, or more... and might even succeed.


[1] GiveWell estimates the average cost of an AMF bed net at $5.31, and the cost per child life saved via AMF bed nets at $2,838. This implies roughly 535 AMF bed nets per child life saved. EAs have bought more than one million bed nets, which means that EAs have saved more than 1,850 lives via AMF bed nets so far.
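As a quick sanity check, the arithmetic in this footnote can be reproduced directly from the two GiveWell figures quoted above (the input values are taken from the text; everything else is just division):

```python
# Sanity check of the figures in footnote [1] (inputs taken from the text).
cost_per_net = 5.31      # GiveWell estimate: average cost of an AMF bed net, USD
cost_per_life = 2838.00  # GiveWell estimate: cost per child life saved, USD

# Nets needed per child life saved: ~534.5, i.e. roughly 535 as stated.
nets_per_life = cost_per_life / cost_per_net

# With more than one million nets bought, implied lives saved is ~1,871,
# consistent with the "more than 1,850" figure above.
nets_bought = 1_000_000
lives_saved = nets_bought / nets_per_life

print(f"nets per life saved: {nets_per_life:.1f}")
print(f"implied lives saved: {lives_saved:.0f}")
```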

[2] EAs have committed more than $350 million to evidence-based poverty interventions via Giving What We Can.

[3] This is obviously harder to judge. The dangers of unsafe AGI reached global awareness in 2015. The primary individuals behind this were Eliezer Yudkowsky (MIRI), Nick Bostrom (FHI), Jaan Tallinn (CSER), Max Tegmark (FLI), Stephen Hawking, Elon Musk, and Stuart Russell, most of whom were trying to warn the world for the sake of doing the most good and preventing the most harm, and the majority of whom (especially the first movers) self-identify as EAs and are leaders of EA organizations. It is thus fair to attribute a substantial portion of the effect to the Effective Altruism movement. Without Eliezer Yudkowsky and Nick Bostrom writing, and without Jaan Tallinn and Max Tegmark organizing, the dangers of unsafe AGI would likely not have reached global awareness for many years. Has this increased or decreased existential risk? Again, this is difficult to judge, but we are inclined to think that it has at least slightly decreased existential risk. This topic deserves a fuller treatment later.

[4] The EA Global/Summit events had 60 attendees in 2013, 180 in 2014, more than 800 in 2015, and are projected to have more than 2,000 attendees in 2016. We believe that EA Facebook membership will tell a similar story. These are imperfect but generally representative measures.

Leverage Staff

Compilation piece, written or inspired by several researchers. Opinions expressed represent common views inside Leverage, not necessarily unanimity.
