What we mean by feedback from grantees, and analysis of a foundation’s effectiveness

Dec 4, 2025 | Blog

Author: Caroline Fiennes, Giving Evidence.

The Foundation Practice Rating includes criteria on feedback from grantees and on analysing a foundation's own effectiveness. This article explains what we mean by these criteria and what we are looking for.

Question 65. Does the foundation publish any feedback it receives from grant seekers and/or grantees? – this must be feedback, e.g., suggestions for the foundation

  • We are looking for feedback from grantees about the funder. We do not count here instances where a grantee of a funder has done a survey of its intended beneficiaries (e.g., the funder funds a foodbank and the foodbank has surveyed people whom it serves).
  • The grantee feedback needs to cover all the funder's operations, not just one or some of its funding streams.
  • It needs to be systematic, covering the views of all the grantees / applicants. We do not count surveys which do not appear to be systematic, e.g., quotes or case studies about a few grantees. That is because good analysis involves the whole picture: quotes or case studies may well have been selected from just the grantees who are most positive, and are therefore not representative.
  • We want to see the full actual data from the feedback. For example, x% said this, y% said that; graphs and charts. We do not award a point if the foundation states that it has done a survey but does not publish the results. (This is consistent with our approach for other criteria. For example, we do not award a point if the foundation states that it has a trustee recruitment policy but does not publish it.)

The survey and analysis do not need to have been conducted by an external agency. Clearly, doing so allows for anonymous responses, and so increases the chances of honest and full feedback, but it is not required.

Question 66. Does the foundation publish any actions (however minimal) it will take to address this feedback (what it is doing differently as a consequence)?

This requires statements and commitments by the foundation as to what it will do differently. Foundations that publish only the survey results – even if they include recommendations from a company that did the survey – do not score a point here.

We are looking for commitments: we quite often see statements about ‘points to consider’ arising from the feedback, or ‘the foundation will consider the feedback’, but those are not commitments.

Question 67. Does the foundation publish any analysis of its own effectiveness? (This is the effectiveness of the foundation itself, not grantees' analysis of what they are doing with the funding.)

This one is more complicated.

Clearly, grant-makers do not (normally) run programmes. Their effects are vicarious, achieved through the organisations they fund: foundations do not vaccinate people, run shelters for homeless people, or teach children to read; rather, it's the grantees who do that. We are interested here not in what grantees are doing, who they are, or how good they are; rather, we are interested in how good the funder is at being a funder.

So we do not award points here for:

  • Analysis of the problem being addressed.
  • Breakdown of where a funder's money went (e.g., by theme, geography, activity, or type of grantee). That is because this does not indicate anything about what the funder is doing well or poorly, and yields no lessons for how it can improve its funding practices. We do not count such breakdowns, whether for individual programmes or across a funder's whole activity.
  • Analysis or breakdown of what grantees are doing (e.g., vaccinating X number of people) or what grantees are achieving (e.g., reduction in malaria rate of Y%).

We do not award points here for feedback from grantees or applicants: that is dealt with under Question 65, above.

Rather, we are looking for analysis of how good the funder is at being a funder: for example, analysis addressing any of the following questions, which can provide useful insights for any funder:

  1. How many grants achieve their goals? (We could call this their hit rate.) Logging the goal of every grant and tracking whether these goals were met would be a big step forward; the funder could then try to find patterns in those hits and misses (see the sketch after this list).
  2. What proportion of funds are devoted to activities such as preparing proposals or reports for the foundation?
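To make the first of these concrete: below is a minimal sketch, in Python, of the kind of grant log and hit-rate calculation we have in mind. Every field name, record, and figure is illustrative rather than drawn from any real funder.

```python
from dataclasses import dataclass

@dataclass
class GrantRecord:
    """One grant, logged at award and updated at close. Fields are illustrative."""
    grantee: str
    theme: str       # e.g. programme area or funding stream
    goal: str        # the goal agreed when the grant was made
    goal_met: bool   # assessed when the grant closes

def hit_rate(grants: list[GrantRecord]) -> float:
    """Proportion of grants whose agreed goal was met."""
    return sum(g.goal_met for g in grants) / len(grants)

def hit_rate_by_theme(grants: list[GrantRecord]) -> dict[str, float]:
    """Hit rate broken down by theme, to look for patterns in hits and misses."""
    by_theme: dict[str, list[GrantRecord]] = {}
    for g in grants:
        by_theme.setdefault(g.theme, []).append(g)
    return {theme: hit_rate(gs) for theme, gs in by_theme.items()}

# Made-up records, purely for illustration:
grants = [
    GrantRecord("Foodbank A", "poverty", "serve 1,000 households", True),
    GrantRecord("Shelter B", "housing", "house 50 people", False),
    GrantRecord("Charity C", "poverty", "reach 200 families", True),
]
print(f"Overall hit rate: {hit_rate(grants):.0%}")  # 67%
print(hit_rate_by_theme(grants))                    # {'poverty': 1.0, 'housing': 0.0}
```

Even a log this simple lets a funder ask where its hits and misses cluster, which is the starting point for improving its decisions.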

We count analyses of the following types:

1. Analysis of which types of grant or how many grants of each type achieved their goals.

Clearly it's fine if not every grant works, but a foundation should identify when something didn't work and understand why, in order to learn. Perhaps it is making mistakes repeatedly simply by not realising that they are mistakes. Strong funders seek to understand which grants worked, and why, and which didn't, and why not; they are interested in the patterns there, and look to improve their decision-making processes. In other words, when they make 'yes/no' decisions – which are at the heart of most funding operations – do they say 'yes' on the right occasions? Equally, do they say 'no' on the right occasions? Such funders are interested in what happens to work that they decline to fund but which nonetheless goes ahead, and in what they can learn from that. (NB: none of this precludes the funder from taking risk. Indeed, it enables risk-taking, because it helps to show how risky something is.)

Examples:

[Figure: two bar charts of Shell Foundation grant performance, split into 'succeeded', 'did OK', and 'failed', across three time periods. Both charts show the proportion of grants classed as 'succeeded' increasing over time.]

2. Analysis of the proportion of funds given out which grantees and applicants end up spending on dealing with the foundation, e.g., writing applications or preparing reports for the foundation.

This analysis would not be hard. It simply involves identifying the cost that applicants / grantees incur at each stage of the funder's process, and the number of organisations at each stage. A back-of-envelope sketch is below.
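As an illustration, a back-of-envelope version of this calculation might look like the following Python sketch; every stage name and figure in it is hypothetical.

```python
# Back-of-envelope estimate of the burden a funder's process places on
# applicants and grantees, as a share of the funds it gives out.
# All stage names and figures are hypothetical.
stages = [
    # (stage, organisations reaching this stage, estimated cost per organisation in £)
    ("outline application",          400,   500),
    ("full application",              80, 2_000),
    ("interview / due diligence",     30, 1_000),
    ("reporting (per funded grant)",  20, 1_500),
]
funds_given_out = 2_000_000  # total value of grants made, £

burden = sum(n_orgs * cost for _, n_orgs, cost in stages)
print(f"Total cost to applicants/grantees: £{burden:,}")            # £420,000
print(f"Share of funds given out: {burden / funds_given_out:.0%}")  # 21%
```

On these hypothetical figures, applicants and grantees collectively spend the equivalent of 21% of the funds given out just on dealing with the funder.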

This matters because funders should minimise the burden of their administrative processes. Giving Evidence has seen instances where the application process is so cumbersome, and the funder deliberately solicits so many applications, that the 'net contribution' of the funder to the charity sector is very low. You can read more about the administrative burden of many application processes, and how it can be reduced.

3. Analysis of performance of grantees (i.e., successful applications) vs. rejected applications.

We have seen this done by one funder, though sadly the analysis is unpublished. It was in academic research, where the success of projects – at least, on bibliographic metrics – normally eventually becomes visible and is comparable across grants. This funder compared the success of work it funded with that of work which it rejected but which did eventually get funded elsewhere. It found… no difference. In other words, its selection process was no better than random, and/or it was adding no value to its grantees. That is worth knowing.
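The funder's data are not public, so purely as an illustration of the shape of such an analysis: one could run a standard two-sample test on a bibliometric measure, as in this Python sketch with made-up figures.

```python
# Comparing a funded cohort against a rejected-but-funded-elsewhere cohort
# on a bibliometric measure (here, citation counts). All figures are made up.
from scipy.stats import mannwhitneyu

funded_citations   = [12, 3, 45, 8, 22, 17, 5, 30]
rejected_citations = [10, 6, 40, 9, 25, 14, 4, 28]  # rejected, but funded elsewhere

stat, p = mannwhitneyu(funded_citations, rejected_citations,
                       alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.2f}")
# A large p-value means no detectable difference between the cohorts -
# i.e. the selection process looks no better than random on this measure.
```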

4. Are they making their yes/no decisions in the best way(s)?

Almost all funders make their decisions subjectively, either by soliciting the opinions of experts about a proposal or by interviewing applicants. Research on everything from picking stocks to student admissions shows that humans exhibit weaknesses and biases in allocating scarce resources. The role of biases in foundations' decisions has not yet been examined (to our knowledge). One funder analysed the effectiveness of the various stages of its selection process, and found that shortlisting applicants on the basis of objective criteria was a better predictor of success (according to bibliographic metrics) than interviewing applicants.
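That funder's data and method are likewise not public, but a comparison of that shape could be sketched as follows: score each stage, then see which stage's scores correlate more strongly with eventual outcomes. All figures in this Python sketch are made up.

```python
# Comparing two stages of a selection process as predictors of eventual
# outcomes, via rank correlation. All scores and outcomes are made up.
from scipy.stats import spearmanr

objective_scores = [7, 4, 9, 5, 8, 3, 2, 6]          # hypothetical shortlisting scores
interview_scores = [6, 8, 5, 7, 4, 9, 3, 5]          # hypothetical interview scores
outcomes         = [70, 35, 90, 50, 75, 30, 55, 20]  # hypothetical bibliometric outcome

for stage, scores in [("objective shortlisting", objective_scores),
                      ("interview", interview_scores)]:
    rho, p = spearmanr(scores, outcomes)
    print(f"{stage}: Spearman rho = {rho:.2f} (p = {p:.2f})")
# The stage whose scores track outcomes more closely is the better predictor;
# for the funder described above, that was the objective shortlisting stage.
```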

We award a point to foundations that publish any analyses of their performance along the lines of those questions above.

Question 69. Does the foundation publish some information about what it is doing differently as a consequence of this analysis?

This is analogous to Question 66 above: we are interested in specific commitments about the actions which the foundation will take in response to the findings of the analysis.

Why this matters

Funders should get good at funding. Funding well is not trivial.

And though there has been increasing scrutiny of the performance of operational non-profits – often driven by funders, on whom those non-profits rely – there has been much less scrutiny of the performance of funders, presumably because most funders don't rely on anybody else. (Caroline Fiennes, who runs Giving Evidence and the Foundation Practice Rating research operation, wrote about donor effectiveness in the scientific journal Nature.)

Only by analysing what they do well versus less well can funders learn to improve, and to make their scarce resource achieve more.
