Blending Evaluative and Organizational Development

This is a guest post by NNCG member Saphira M. Baker, Principal of Communitas Consulting, and Anita McGinty. It was originally published on Stanford Social Innovation Review and is re-posted here with their permission.

In the social sector, people tend to see program evaluations as high-stakes endeavors designed to confirm the value of specific programmatic work. And yet the findings often feel irrelevant or unactionable to the very people who do that work. Results may not cohere with what the organization intuitively knows about itself, its culture, its beneficiaries, and its history.

The two of us recently had the opportunity to blend our respective expertise in program evaluation and organizational advising in a highly collaborative way. We worked as a team with Robins Foundation, a family foundation in Richmond, Virginia, to evaluate its flagship investment in an early childhood program. The foundation approached Communitas Consulting (Saphira’s practice) to better understand the impact of its investment and recommend a future organizational design. Because the foundation’s questions were both organizational and programmatic, Saphira reached out to Anita, an independent evaluator, to join as research director for the project. Together, we designed an integrated approach.

From the start, this collaboration was unusual; organizational advisors often follow evaluators, picking up where the evaluators left off. Evaluators come in to solve the puzzle of what happened and to present a truthful rendition of program results. Organizational consultants come in to diagnose a situation and—after building a trusting partnership with leadership and staff—recommend a new direction or strengthen the organization’s structures and practices.

As is often the case, there was a lot at stake in the assessment of the early childhood program Robins funded. Historically, the program had been the foundation’s signature initiative: a highly visible building bearing the foundation’s family name, filled with young children enrolled in a busy child care center. A relatively new foundation president was taking the risk of asking her board to re-examine a project that some of them had proudly founded and funded, and that was backed by national studies and a costly business plan that made the case for success. The nonprofit’s director had been in place only one year. We knew we would need a high degree of cooperation, trust, and breathing space for all involved to delve into the many questions and the potential political and reputational liabilities behind the evaluation.

Our approach to the assessment integrated the skills and viewpoints of an evaluator and organizational consultant each step of the way, and multiplied the learning for all. One unanticipated result was insight into how the foundation could better manage and monitor several of its “big bet” investments and offer the support grantees needed to succeed.

We believe this approach can work equally well for other evaluation and organizational development projects for three reasons:

1. There are no artificial boundaries between questions.

Too often, the true motivation for an evaluation goes unexamined, and an opportunity is missed. If the focus is strictly programmatic, issues of governance, leadership, or political realities are not likely to surface early on. This oversight can lead to a narrow research design, prolonged periods of uncertainty and stress for the nonprofit being evaluated, and added cost and unanswered questions for funders seeking data-driven change. A strong collaborative team can reduce these risks by eliminating the need to separate questions of program impact from questions of organizational priorities, culture, or structures.

For example, Robins Foundation initially came to us with a programmatic question: Did the work it funded over the past five years make a difference? By examining this question through both evaluator and organizational development lenses during initial conversations, we learned that the foundation’s request was driven in part by internal questions about its own funding approach and the role it should play in program governance.

Had we kept to the program evaluation focus, we would have missed discovering that in the early years of the project, the foundation exercised significant influence over the nonprofit’s hiring, program design, and selection of community partners. This finding allowed us to recommend and model a more transparent and equal partnership between the foundation and its grantees.

2. There is greater flexibility throughout the evaluation.

When you look at a nonprofit organization and its programmatic impact simultaneously, it’s easier to make on-the-ground adaptations to the evaluation plan that can help inform decision-making. There is no waiting period between reports.

During our evaluation, for example, we found that the program was originally grounded less in what its customers—parents of young children—wanted from an early childhood initiative than in “best practices” in the field. Although this history was not central to the evaluation, because it had happened a decade before, it was extremely important to the process of forming organizational recommendations for the future. Our research team was on hand to design a strong community survey and a set of focus groups to help identify the systemic barriers facing families. These efforts resulted in a high-quality, research-based portrait of the community at no added cost, followed by detailed organizational recommendations based on the data. We found, for example, that parents in the neighborhood wanted safe spaces for their young children and had few trusting relationships with peers or program staff. These findings shaped the final program designs we recommended.

In short, our collaborative approach allowed for greater discovery, adaptation, and resolution within a compressed period of time, and empowered the nonprofit to put new and relevant information to work.

3. Trust comes through continuous learning.

Learning from an evaluation is most likely to stick when researchers and organization staff build strong, trusting relationships throughout the process, rather than experience the one-way delivery of an evaluation report. And compared with traditional evaluators, organizational consultants may be more attentive to a team’s readiness to receive and act on findings.

Fearing that the threat of reduced funding for the program would stymie transparency and collaborative learning, we asked the foundation to guarantee the nonprofit two years of level funding, starting the clock with the assessment. It generously agreed. We then created a small planning team, with members from both the foundation and the nonprofit, to guide the research and organizational assessment work. In this way, our “client” became both the foundation and the executive of the program we were evaluating. The planning team also established a safe space for reflection and troubleshooting, and generated useful information throughout the assessment process.

Of course, not all nonprofit organizations or philanthropies can afford this collaborative approach to evaluation. Those that can’t should nevertheless consider asking: Is our motivation for a program evaluation primarily to assess results for our target population, or do we also want to change our organizational focus, develop new relationships, or shift a funding strategy? If you want to work on one of the latter, consider a broader lens. In hiring a consultant, can you find someone who has both nonprofit capacity-building and program evaluation expertise, and who can understand the organizational and community context? Will you be in a position to pivot if you learn something new? Will your team be able to receive results as they emerge, and engage in honest analysis and problem solving?

When possible, a collaborative and simultaneous approach to evaluation and organizational development—and the precise, pragmatically packaged data it can produce—can enable both nonprofits and funders to take bold action. Our team brought rigor and depth to the program evaluation in a way that was responsive to a problem-solving lens. The approach allowed us to maintain a focus on the broader dynamics of funder-grantee relationships, organizational culture, and history, and ensured that our evaluation work was contextualized, digestible, and action-oriented.
