What we do with foundations’ comments on our draft data about them
This blog post is written by Caroline Fiennes, Giving Evidence
Every year, we send each of the 100 foundations in the cohort our draft data about it, and invite the foundation to check them. This article explains what we do with foundations’ comments on those data.
How we generate our draft data about each foundation
Each included foundation is researched by two researchers working independently, and their answers to each of our ~100 questions are moderated by a third researcher – and sometimes a fourth – to resolve disagreements. We also check the data against those from previous years, to be sure that changes we have ‘found’ really do exist.
That all produces our draft data.
We then send each foundation its draft data, giving it the chance to check them and comment. This is in the interests of data accuracy.
Responses from foundations and how we deal with them
This year (Year Five), 25 foundations responded. Some just acknowledged the data, or thanked us, or – in a couple of cases – asked whether they could have an extension (answer: possibly!) or whether we could look at their new website, which would launch after our research period closes (answer: no, sadly). Some made what a diplomat friend calls “representations”: points that they would like us to reconsider; a policy or report that they think is relevant and that we missed; or an explanation of why some public statement is no longer true.
We consider them all. We look at every document, statement, and policy that the foundations cite, and we sometimes change our data as a result: it is, of course, possible that we missed something.
Sometimes we miss something because, though it is public, it is hard to find: our researchers each have up to 90 minutes per foundation, a deliberate ceiling that mimics the way a prospective applicant might look at a foundation. Sometimes material exists that they cannot locate within that time. Other times, we miss information because it is in an unexpected place: for example, for one foundation this year, we were unclear whether it accepted unsolicited proposals. It pointed out that it does not, and that there is a statement to this effect on its website – though right at the bottom of the page, in the footer where privacy statements etc. often sit – which we missed.
Sometimes we don’t change our data. This happens if the material provided or cited isn’t adequate. It might:
- Be too old: we consider only reports less than three years old;
- Not cover the whole foundation: e.g., for analysis of a foundation’s effectiveness, the criterion is about analysis across all the foundation’s activities, rather than just one programme;
- Already be credited elsewhere: e.g., surveys of grantee perceptions count for Q65 (about grantee feedback) but do not also count for Q67 (about the foundation’s effectiveness);
- Not be what the criterion is after: e.g., for the criterion about consulting with communities, one foundation this year asked for credit for a document explaining its strategy, but we found no mention anywhere – including in that document – of having consulted with communities to create that strategy (or anything else).
We are careful to apply each criterion consistently, both across the cohort and across time. Because this is now Year Five, we have answered most of the Foundation Practice Rating’s questions five hundred times: 100 foundations a year, for five years.
Sometimes foundations explain (either publicly or to us privately) why they do not publish a particular thing: e.g., not publishing the names of staff working on a programme which might be contentious, such as human rights in a country that does not welcome them. We deeply understand these concerns. In such cases, we are open to exempting those foundations from those criteria, to avoid penalising them for practices which are obviously necessary.