

February 11, 2010

The shift away from a philanthropic paradigm based on demonstration projects and government-sponsored replication has also fundamentally altered the role of evaluation in philanthropy.
In the past, evaluation may have been the linchpin of scaling social impact: by proving the effectiveness of a particular intervention, it provided the justification for large-scale funding from government or other sources. (Although it has never been clear exactly who those “other sources” might be!)
Now that we are shifting to a paradigm in which we expect nonprofits to scale themselves, evaluation necessarily plays a different role. Funders are looking not only for an effective intervention, but also for an efficient and well-managed organization capable of operating at scale. Evaluation has expanded, therefore, to include performance monitoring and assessments of organizational effectiveness. It is not uncommon for evaluations today to consider organizational dynamics such as the governance structure, leadership succession, use of technology, cost per output, and financial viability of grantees.
More important, evaluation has itself become a means to increase organizational effectiveness.
Although randomized controlled trials and summative evaluation processes remain important and frequently used, the field of philanthropy is steadily shifting toward an ever-expanding set of formative evaluation techniques that focus on improving efforts underway, rather than isolating and assessing the consequences of completed activities. These techniques emphasize organizational learning and informed decision-making as the key benefits of evaluation, rather than proof of a concept. Often they are modeled on the management information systems that for-profit companies use, providing real-time feedback on key performance indicators that can guide the management of grantees and grantors alike.
A second shift, as yet less developed, is a move away from the evaluation of individual programs or organizations toward evaluation processes that track the progress of an entire field. Under the old paradigm, in which philanthropy sought to identify innovations for others to replicate, it would have made little sense to evaluate a collection of different organizations, each pursuing separate initiatives. No scientific conclusions could be drawn from such a mixed bag of activities. Yet the nonprofit sector has evolved into a complex ecosystem in which the interrelationships among the different actors are as important as the actions of any one organization. In a recent white paper, Breakthroughs in Shared Measurement and Social Impact, my colleagues at FSG Social Impact Advisors and I documented the emergence of nearly two dozen evaluation systems that provide a common web-based platform for hundreds or even thousands of organizations to track their outputs and outcomes on a commonly defined set of indicators.
Shared measurement systems such as these offer an immense advance in evaluation by allowing data to be gathered rapidly and inexpensively. Even more valuable, however, is that data from different organizations can be compared, enabling best practices to surface and organizations to learn from each other. These systems are already contributing to the learning of funders, as with the comparative grantee perception data collected by the Center for Effective Philanthropy and the community foundation operating data at FSG’s Community Foundation Insights division. And, through dozens of other organizations and collective efforts, they are advancing the learning of grantee organizations in fields as diverse as the arts, global health, K-12 education, community foundations, and economic development.
Behind this shift in evaluation lies a fundamental change in attitudes about the creation of knowledge and the sources of innovation. The old paradigm dictated a centralized model of knowledge and innovation in which a foundation, university, or nonprofit organization might research a social issue, propose a new idea to “solve” the problem, then assemble the funding to test the idea, in the hope that it would succeed and be replicated.
The emerging model today, however, takes a systemic and evolutionary approach that begins with the recognition that there are hundreds or thousands of organizations already working on any given social problem. Knowledge and innovation are not theoretically derived and externally imposed on this system. Instead, they bubble up from the collective experimentation of all participants over time. Evaluation systems play their most important role not by isolating and judging any single effort as a success or failure but by enabling organizations to judge their relative effectiveness in comparison to one another on commonly defined performance metrics, and thereby to gradually increase their effectiveness over time.
In other words, the Encyclopedia Britannica has been replaced by Wikipedia as a model for social innovation. We no longer believe that foundations are the R&D arm of society that will bring the “cure” for society’s shortcomings. Instead, they can foster and accelerate social progress through increasing the knowledge and effectiveness of our nonprofit sector, enabling the system to learn from itself.