Foundations face many of the same needs and dilemmas as nonprofits and the public sector when it comes to evaluation. The Council on Foundations' website lists reasons for involving, and even expanding, the philanthropic community in evaluation work, such as gauging the appropriateness of grant objectives or the likelihood of raising additional funds. While foundations often recognize the benefits of evaluation, they may struggle to identify the right models, be reluctant to devote resources to evaluation, and worry about how the resulting data will reflect on their work. Even with these concerns, a silver lining may lie in an unexpected place: finding failures. Foundations are in a unique position when they learn that certain programs or approaches don't work. They can convene leaders to problem-solve, support new innovations, and direct their resources differently. For example:

Funding the right things
A leading evaluator, Dr. Michael Scriven, suggests that evaluation is a way to move beyond the belief that the philanthropic sector is "doing worthwhile things with their time, and money…doing them with good intentions and not for self-interest, and producing good results[1]." Evaluation allows you to test your beliefs and know the impact of your work, who it affects, and whether it is even needed.

Supporting organizations to be the most effective
Evaluation can help grantees understand what they are doing well, and where they can do even better.

Using the right grant-making strategies
The Gates Foundation reports that the information gathered through evaluation helps it learn and adapt. Foundations can use evaluation to explain their work to the public, build trust, and define the types of work and programs they support.

Most foundations use one or more of three primary evaluation strategies. They:
  1. Gather evaluation data from their grantees (reporting forms or requiring an evaluation report)
  2. Conduct cross-grant evaluation (often by contracting with a third party evaluator)
  3. Monitor key community indicators (to help foundations identify needs and observe changes and trends)
One lesson the Improve Group has learned through conducting evaluations for foundations is that evaluating advocacy efforts presents particular challenges. A discussion with stakeholders, both fellow grantors and grantees, revealed that a focus on a big "win," such as a policy change or newly adopted rules, can cause more intermediate successes to be overlooked. With this in mind, foundations can work alongside the advocacy groups they support to develop intermediate indicators by which to measure progress and success.

Examples of evaluation by innovative foundations

NOTE: Sara Stalland is a student at the University of Minnesota's Humphrey School of Public Affairs and is working toward her Master's in Public Policy. She is currently interning with the Improve Group, assisting with data collection.
[1] Scriven, M. (1996). The theory behind practical evaluation. Evaluation, 2(4), 393-404.
