Impact evaluations

We will make our best effort to quantitatively compare the impacts of submitted projects. We expect this to be challenging.

We make absolutely no guarantees about the quality of our evaluations. Where possible, we will try to be unusually explicit and transparent, but the actual evaluations will be quite rough. Making decisions explicitly and publicly, without also being unusually thorough or unusually conservative, opens our process up to justified criticism. We look forward to criticism and will treat it as a learning opportunity, but we will not apologize for inaccurate or indefensible evaluations; we will just try to do better next time.

We will not be able to evaluate all projects. We will be willing to accept most projects at some price, but the price may reflect a large discount due to our inability to evaluate some or all of a project’s impacts. For some projects we will simply admit defeat and decline to make an award. In general, our evaluations should be thought of as lower bounds rather than “best guesses” for the value of the projects (although of course they will often be mistaken in both directions). This is especially important to keep in mind given that evaluations reflect our views not only about the projects themselves, but also about narrow empirical questions, big-picture issues, and values.

The amounts of money being awarded are quite small compared to the costs of extremely thorough evaluations. Applicants should bear this in mind. That said, our evaluations will sometimes be more extensive than the amounts of money involved would justify—we may use concrete cases as an opportunity to explore important methodological or empirical issues, or questions which we expect to recur in the future or be of interest to others.

Applications will describe the project (preferably by pointing to an existing description and/or to the project’s outputs themselves). Based on this description we will do our best to make an informed evaluation: we may think about the project, do cursory desk research, talk to people (both the applicant and others), and/or initiate public discussion.

Applications may include a brief argument for impact and/or a summary of available evidence. In general, if we need information we will try to ask applicants for it. Explaining the argument for impact, or the available evidence, is most important when we may not know what we don’t know.

Unobservable characteristics of projects will be treated relatively pessimistically. This is a procedural rather than epistemic decision—we don’t necessarily think that such projects are unlikely to be impactful, but we do think that it is better for us not to fund them through this mechanism.

Some evaluations may be discussed publicly, before or after awards are made. Applicants should be aware that their project may be discussed publicly, both by us and by others. Although we will attempt to encourage respectful discussion, we will also be forthright about our views and the basis for our decisions. Would-be applicants should apply only if they are open to public and potentially critical discussion.