Projections vs. evaluations

Most grants are awarded based on the predicted impact of spending. Certificates can be awarded based on a retrospective evaluation of impact.  The 2015 Impact Purchase is exclusively purchasing certificates for already-completed activities.

Both approaches have significant advantages, and should probably coexist as complements.

The basic case

Considerations in favor of retrospective evaluations:

  • Predictions are quite expensive to make, and reasonable ones especially so. Monitoring compliance can be annoying for both sides.
  • Predictions are not very accurate. In some sense this is an unavoidable problem: we need to make predictions in order to decide what to do. But funders are sometimes (often?) not the best people to make predictions—they know less about the project, they have less relevant expertise, and they often have less time.
  • Incentives for performance seem preferable to incentives for successful fundraising and storytelling.
  • Evaluations produce especially useful information as byproducts: if I am deciding what to do, I would probably prefer to look at 10 evaluations of completed projects than at 10 grant reviews. Evaluations also seem to have larger positive social effects.

The first two problems contribute to a few possible distortions:

  • Hard-to-predict philanthropic projects are unnecessarily hard to get paid for.
  • Small scale activities are unnecessarily hard to finance.
  • Small scale donors have an unnecessarily hard time giving away money.

Capital allocation

One reason to make predictions instead of retrospective evaluations is that funders’ predictions are needed in order to allocate capital. Many projects wouldn’t go ahead unless a large funder committed to paying for them. (Under the certificates system, this would be implemented by a philanthropist funding a project in exchange for a share of whatever social impact it produces.) However, there are many cases in which this is not necessary:

  • Many small projects require modest amounts of labor and very little capital.
  • Successful small projects can be scaled up gradually, with proceeds from early successes being used to finance growth.
  • Implementers, people close to the project, and philanthropists with a strong interest in an area can provide financing.  This shifts the work of prediction from a large funder to someone who can do it better—without requiring the large funder to trust someone much in advance. This model is extremely common in the for-profit sector.

Realistic predictions of impact are often based (heavily) on extrapolation from past performance (so that projects already need to start small and scale up, with early adopters and implementers taking a risk in the hopes that larger funders will subsequently become interested). So the retrospective model is not quite as alien as it seems. Making this arrangement implicit seems to have a few disadvantages:

  • Restricting funding to similar activities, as evaluated by the funder, seems worse than letting the implementer decide for themselves how they want to follow up on an early success. It ends up involving unnecessary effort for the grantee, distortions in what the grantee does, and extra effort for the funder.
  • Early stage funders are not compensated for successful efforts. We end up with a potential shortfall of early stage funding, and a lack of accountability and feedback for early stage funders.
  • In general, it seems good to say what you are doing and why, and to be transparent about your decision-making process. Implicit evaluations of past performance seem more likely to be the product of political maneuvering and zero-sum conflict, and explicit evaluations would probably mitigate these factors.