We examine prevailing practices and pivotal issues in philanthropy concerning how foundations learn, improve, and support their grantees. This work documents evaluation strategies, structures, and processes, and explores how they can be strengthened. We commission teaching cases on important initiatives and strategies that shed light on the ramifications of the decisions foundations make about what to evaluate, when to evaluate, and how evaluations are ultimately used.

We also write commentary and thought pieces about foundation effectiveness to encourage discussion and debate in the field.


We use in-depth teaching cases to help people learn by offering an unvarnished examination of how an evaluation unfolds in the context of a major foundation initiative or experience.

The Shaping of Evaluation at the William and Flora Hewlett Foundation (2016)
Tells the story of the evolution of an evaluation function at the Hewlett Foundation from 2000 to 2015. Examines how one foundation’s culture shaped that evolution and key decisions about the function's purpose, structure, resourcing, positioning, and more.

First Among Equals: The Evaluation of the J.W. McConnell Family Foundation Social Innovation Generation Initiative (2014)
About a non-traditional approach to strategy in which the foundation did not play the role of primary strategist, but rather recognized—and tried to operate as if—it was but one of many interconnected strategists. It is a story in which all players—the foundation, the developmental evaluator, and the grantees—had to continually adapt to keep this initiative moving forward.

Paul Hamlyn Foundation Learning Away Programme [UK Evaluation Roundtable] (2014)
Focuses on the evaluation of an ambitious educational programme that attempted to use evaluation to support strategic learning. Chronicles challenges that included a mismatch between evaluation purpose and evaluation design, as well as trying to simultaneously meet the needs of both the Foundation and grantees.

Evaluation of the David and Lucile Packard Foundation’s Preschool for California’s Children Grantmaking Program (2012)
Tells how external evaluators and program staff took a risk on a nontraditional approach to evaluation. Called "real-time evaluation," it aims to promote strategic learning by providing regular, timely information about how a strategy is unfolding, which organizations then use to inform future work and make course corrections.

Measuring Change While Changing Measures: Learning In, and From, the Evaluation of Making Connections (2010)
Describes the evaluation of a multi-site, decade-long community change effort by the Annie E. Casey Foundation that aimed to improve outcomes for the most vulnerable children by transforming their neighborhoods and helping their parents achieve economic stability, connect with better services and supports, and forge strong social networks.

Death is Certain; Strategy Isn't: Assessing the Robert Wood Johnson Foundation's End-of-Life Grantmaking (2008)
Describes a strategic assessment of the Robert Wood Johnson Foundation's 20-year investment in end-of-life grantmaking. Illustrates the issues raised in conducting an assessment of a strategy rather than of a single program or initiative.

Looking For Shadows: Evaluating Community Change in the Annie E. Casey Foundation Plain Talk Initiative (2006)
Explores the evaluation of a large multi-city initiative that sought to make contraceptives available to sexually active youth to reduce pregnancy and sexually-transmitted diseases. The evaluation design included within-site and cross-site analysis and featured multiple data collection strategies, including a baseline and follow-up survey, a heavy emphasis on qualitative research, and review of administrative data.

The Devolution Initiative Evaluation: Innovation and Learning at the W. K. Kellogg Foundation (2004)
Linked to a foundation initiative designed to foster learning about government investments to devolve major responsibilities for welfare reform and health care policy from the federal government to the states. The evaluation had multiple components, which included providing timely, continuous feedback to the foundation about how the initiative was—or was not—working.

Changing Stakeholder Needs and Changing Evaluator Roles: the Central Valley Partnership of the James Irvine Foundation (2003)
Describes how the evaluator's role shifted as the program evolved and as the needs of the client and intended users changed over time. The initiative aimed to assist immigrants in California's Central Valley.

Evaluation of the Robert Wood Johnson Foundation Fighting Back Initiative
Describes an evaluation of a large-scale, multi-site effort to harness community-generated strategies to reduce the use and abuse of alcohol and illegal drugs. The findings of this controversial study were sharply questioned by stakeholders, and the case illustrates how important issues of program design affected the evaluation findings.

Home Visitation: A Case Study of Evaluation at The David and Lucile Packard Foundation (2002)
Describes how a long-term investment in evaluation was instrumental to a foundation strategy to support home visitation and education to parents about effective interaction with their young children. The case illustrates the substantial impact an evaluation can have on a field.


We research and write about philanthropic strategy, evaluation and learning practices to inform and improve foundation effectiveness.

How Shortcuts Cut Us Short: Common Cognitive Traps in Philanthropic Decision Making
Center for Evaluation Innovation, May 2014

Highlights five common cognitive traps that can trip up philanthropic decision making, and suggests eleven straightforward steps to counteract them.

Eyes Wide Open: Learning as Strategy Under Conditions of Complexity and Uncertainty
The Foundation Review, Fall 2013

Foundation strategy can be hampered by a failure to recognize and engage with the complexity and uncertainty surrounding foundation work. This article identifies three common “traps” that hinder foundation capacity to learn: 1) linearity and certainty bias; 2) the autopilot effect; and 3) indicator blindness.

Benchmarking Evaluation in Foundations: Do We Know What We are Doing?
The Foundation Review, Summer 2013

Based on 2012 research, offers eight findings about what foundations are doing on evaluation and discusses their implications.

Necessary and Not Sufficient: the State of Evaluation Use in Foundations
October 2011

Explores the extent to which foundations evaluate the results of their work (based on data from our 2009 benchmarking study) and highlights notable variation among foundations whose evaluation units report directly to the CEO.

Beyond the Veneer of Strategic Philanthropy
The Foundation Review, Winter 2010

"Strategic philanthropy" has become a dominant theme among foundations in the past few decades. While many foundations have developed strategic plans, few have made the internal changes necessary to behave strategically. Examines four key challenges to strategic philanthropy.

Evaluating Strategy
New Directions for Evaluation, Winter 2010

Examines what it means to evaluate strategy. How is strategy different from a theory of change or a logic model? Reviews how strategy is perceived in different sectors and offers a framework for evaluating strategies.

The Evaluation Conversation: A Path to Impact for Foundation Boards and Executives
The Improving Philanthropy Project, October 2006

Raises five key questions for foundation executives and board members to consider so that their evaluation efforts can best support the foundation's work and become a vital institutional tool for achieving philanthropic purpose and improving strategy.

Teaching Evaluation Using the Case Method
New Directions for Evaluation, Spring 2005

Presents high-quality evaluation teaching cases developed specifically for use with the case method.

Practice Matters: The Improving Philanthropy Project
Ten issue-focused papers tackle some of the most compelling "best practice" issues in the field of philanthropy. Practice Matters seeks to raise the bar on grantmaker effectiveness for both newcomers and experienced foundation staff who wish to sharpen their skills.


2016 Benchmarking Report on Evaluation in Foundations
The most comprehensive data collection effort to date on evaluation practices at foundations, developed in collaboration with the Center for Effective Philanthropy. Shares data points and infographics on evaluation at 127 foundations in the U.S. and Canada that give at least $10 million annually or are members of the Evaluation Roundtable.

2012 Benchmarking Report on Evaluation in Foundations
Benchmarks the function and positioning of evaluation in foundations, including the range of activities used to produce evaluative information. It also explores perceptions of the adequacy of resources (staff time and money) dedicated to evaluation-related work and how well foundations use evaluative information.

2009 Benchmarking Report on Evaluation in Foundations
Highlights several trends that include the decrease in investment in evaluative studies, the expanded role of evaluative staff in program strategy, and the increased reliance on metrics. Also surfaces the importance of the CEO's role in effectively promoting the use of evaluative information.

2006 Evaluation Roundtable Action Steps: Making Evaluation Matter
Discusses participants' diagnosis of the obstacles and issues that keep foundations from getting the most out of evaluation, and presents an agenda for action with steps to help foundations benefit from it.
