By Juan Clavijo and Jasmine Burnett
Introduction
In our work with social sector innovators for over 30 years, we have partnered with nonprofits, funders, government agencies, and intermediaries to advance equitable outcomes and social justice initiatives. Over this time, we have evaluated many approaches to advance social change, giving us a broad perspective on the types of strategies social sector actors are implementing to advance their goals. Over the past five years or so, and certainly after the COVID-19 pandemic and the social uprisings of 2020, we have seen increased attention to publications about and experimentation with participatory approaches to philanthropy.[1] While there are many different types of participatory practices in philanthropy (community governance models, participatory design, etc.), in this blog we focus on participatory grantmaking (PGM) as a way for funders to decide how to distribute their resources. This approach, which typically involves advisory committees, is defined by GrantCraft as “a grantmaking approach that cedes decision-making power about funding—including the strategy and criteria behind those decisions—to the very communities that funders aim to serve.”[2]
Given the increased interest in PGM that we have seen in the sector, we wondered how funders were assessing these initiatives to know the extent to which they had been successful, so we conducted a light landscape scan across a variety of sources. A study from the University of Washington found that 70% of funders engaging in participatory processes reported not evaluating the results of stakeholder engagement. Among the resources we reviewed, we found little guidance on how to assess these initiatives. Therefore, in an effort to propel the conversation and contribute to the sector’s understanding of PGM and its impact, we developed a framework to help funders, evaluators, and participants in PGM initiatives think about evaluation more deeply and, hopefully, engage in (and publish) assessments of these initiatives so the sector can learn about PGM’s effectiveness as a strategy to advance social change.
Breaking Down PGM into Evaluable Components
To assess progress toward PGM outcomes, we first had to define them. We explored the literature on PGM to identify the core goals practitioners seek to accomplish through these practices and how those desired outcomes map to distinct actors. We applied an approach we often use when building a theory of change: we asked who is expected to change as a result of a PGM initiative, and how? Through this approach, we identified three different types of actors we expect to change through PGM programs: non-foundation PGM initiative participants (advisory committee members), the foundation and its staff, and broader community actors. We define these three types of actors, along with sample expected outcomes for each, in Table 1.
Table 1 | Sample Expected Outcomes by Actor Type
- Advisory Committee members: Community members or representatives who are not foundation staff and are engaged in the PGM initiative to inform decisions about how resources are allocated. Sample expected outcomes:[3]
  - Improved ability to represent community interests
  - Increased influence on grantmaking decisions
  - Stronger relationships with the foundation, its partners, and community members
  - Improved existing skills, or newly gained skills, connected to power building and community leadership
  - Sustained engagement in community efforts outside of the PGM initiative
- Foundation and its staff: The philanthropic institution and the program staff who are implementing and funding the PGM initiative. Sample expected outcomes:
  - New knowledge about how PGM enables effective and equitable grantmaking
  - Increased resources available for PGM initiatives over time
  - Increased confidence among staff about how to implement and manage PGM
  - Greater buy-in among leadership about the value and impact of PGM
- Community actors: The broader set of community members who will be impacted by the work of the foundation, including grantees and the clients or constituents they impact. Sample expected outcomes:
  - Stronger and better-resourced grantee partners
  - Progress toward community-identified outcomes
  - Sustained power-building and community-led efforts outside of the PGM initiative
  - Improved relationships with the foundation, its staff, and other partners
An Initial Framework
Once we identified these outcomes and types of actors, we worked to determine key questions evaluators might examine to assess progress toward intended goals. From the literature we reviewed, we identified three main categories of evaluation questions for PGM initiatives:
- Process questions: PGM is a decision-making process that outlines who has decision-making power over the allocation of assets. As such, the literature places great emphasis on the importance of process in assessing PGM initiatives. Questions about who is involved, how representative participants are of their communities, how the initiative is structured to support engagement and decision-making, and how much funding the initiative can distribute to communities are all process questions about the initiative’s design. These questions are where most PGM assessments have focused thus far, and they ask: How effectively was the initiative designed to meet its purpose?
- Initiative outcomes: Refers to outcomes or results directly connected to the PGM initiative: To what extent did participants influence grantmaking? What is different for participants, the foundation, and communities as a result of the initiative?
- Broader impact: Refers to broader outcomes the initiative may contribute to like sustained power building within the community, stronger relationships and actor ecosystems, and shifts in practice among the foundation and its partners.
The Participatory Grantmaking Evaluation Framework (see Table 2) brings all of these elements together to present a framework of evaluation questions and data sources an evaluation could consider to assess different types of outcomes across the various actors involved and impacted by this approach.
Table 2 | Participatory Grantmaking Evaluation Framework

Data sources and methods
Evaluating PGM initiatives in ways that reflect the distinct roles and experiences of participating actors will require a multi-source, longitudinal approach to data collection. When measuring outcomes for advisory committee members, data may include pre- and post-engagement surveys, facilitated focus groups, and periodic follow-up conversations to examine changes in knowledge, confidence, relationships, and sustained involvement in community work over time. To measure progress toward foundation outcomes, evaluative evidence can be drawn from the program’s meeting minutes; interviews with program staff and organizational leadership; documentation of governance or decision-making processes that demonstrates internal learning or institutional change; and monitored shifts in the allocation of financial and staff resources (i.e., has more or less money and staff capacity been allocated to a foundation’s PGM initiatives over time?). For community actors’ outcomes (including grantees and other community members who may not be directly involved in the advisory committee), data sources may include grantee reporting, interviews or focus groups with non-participating community members, and ongoing observation of community outcomes over time.

To synthesize these varied data sources and capture the complex and often emergent nature of power-building and relational change, evaluators may employ participatory and developmental methods such as ripple effect mapping, outcome harvesting, and the most significant change technique. These approaches are well suited to PGM contexts because they can surface qualitative insights, including shifts in diffuse forms of influence, relationships, trust, and decision-making authority. They can also help uncover unexpected outcomes and rank outcomes by their significance to the different types of actors involved.
Considerations
As we step back from the framework, literature review, and conversations that went into this process, we offer the following considerations for funders and evaluators of PGM initiatives:
- The field would benefit from a better understanding of the impact of PGM initiatives than what we have seen thus far.
This breakdown of the types of evaluative questions, actors, and expected outcomes in a PGM initiative provides the structure for a more detailed framework to evaluate and assess PGM initiatives. Through this framework, we posit that comprehensive evaluations of PGM initiatives should include elements that answer different types of evaluation questions and consider perspectives and data across different actors. Most assessments thus far have focused on process questions, describing what PGM is and how individual initiatives have been designed or structured to attain certain outcomes. We have seen some attention to initiative-specific outcomes, including the extent to which participants have power to influence the process as the main indicator of success. However, we have not seen published accounts that follow PGM participants to assess other benefits of the process for them, nor have we seen many accounts of how foundations are internalizing what they learn, whether it is shaping their practices, leading to increased or changed grantmaking strategies, or informing their work in other ways. Finally, we are curious to understand the extent to which communities at the heart of the work benefit from grants made through a PGM process. To what community-level outcomes did the PGM initiative contribute (either expected or emergent), and what is different for communities as a result of the PGM initiative? What happens in the community after grants are made in terms of relationships and other systems changes? Answering a broader set of questions would enable a more comprehensive understanding of PGM as a practice, informing better decisions, strategies, and, hopefully, outcomes for the actors involved in these processes.
- Funders and evaluators should design learning to inform broader uptake within and across foundations, if that is, in fact, an expected outcome of pilot PGM initiatives.
Ultimately, PGM presents the opportunity to shift power over funding to the very communities that funders aim to serve. PGM initiatives thus far have been small, pilot-like efforts as foundations try this new approach to grantmaking. In the absence of published evaluations, we wonder what has happened across these pilot initiatives, what foundations have learned, and what they intend to do moving forward. To what extent are funders truly considering PGM as an alternative approach to transform their grantmaking practices? For those who are considering broader transformations beyond an isolated pilot, what do they need to learn about this practice to inform broader adoption and integration throughout their foundations? Thinking more broadly about the philanthropic sector, what evidence would help foundations writ large better understand the benefits, challenges, and shortcomings of PGM as they consider different ways to structure grantmaking?
Pilot initiatives have tested these waters, but we don’t yet have clarity about the extent to which there is broader uptake of this practice within and across foundations. As evaluators, we assume that evidence can and should inform uptake, and question whether funders are approaching learning about PGM initiatives with enough intentionality to truly drive uptake. Evaluators have a responsibility as well, given their power and influence in designing evaluation efforts. Weighing prioritized questions, available resources, and the potential burden on different actors, evaluators can serve as thought partners to funders in designing fit-to-purpose evaluations that enable the learning funders truly need to make strategic decisions.
Looking Ahead
We offer this framework as an initial guide to how evaluators, implementers, and participants can think about and structure evaluations of PGM initiatives. While not exhaustive, we believe this framework offers a more comprehensive list of questions and data sources to consider when designing a robust evaluation that captures the PGM process as well as the different types of expected and emergent outcomes. Future iterations can expand on outcomes, differentiating between short- and long-term outcomes; elaborate on data sources and methods; and offer examples of practical applications of the framework. We hope the framework is helpful and inspires further evaluation (and publication) of studies about PGM so that the sector as a whole can learn more about this grantmaking approach as a strategy for shifting power to those most directly impacted by the issues foundations seek to solve, and ultimately, for addressing those issues effectively.
Acknowledgements
We want to thank the various foundations we've spoken with over the past year about their participatory approaches and Katy Love for her support and input in the creation of this framework.
Footnotes
[3] This list is not meant to be comprehensive or exhaustive. Rather, it is meant to illustrate some of the different outcomes across actor types that an evaluation might use in its assessment of a PGM initiative.
Citation
Clavijo, J., and Burnett, J. 2025. “A Framework to Evaluate Participatory Grantmaking.” ORS Impact Blog. https://orsimpact.com/blog/A-Framework-to-Evaluate-Participatory-Grantmaking.htm.
Bibliography
Below is a bibliography of sources we reviewed in this work. We also used AI to create a NotebookLM notebook with all of these resources, where users can query and interact with the information to learn more about PGM: Participatory Practices and Grantmaking in US Foundations.
Abdo, M., et al. 2023. Participatory Grantmaking: Building the Evidence. Issue Lab. June 13, 2023. https://participatorygrantmaking.issuelab.org/resource/participatory-grantmaking-building-the-evidence.html.
Arnstein, S. R. 1969. “A Ladder of Citizen Participation.” Journal of the American Institute of Planners 35 (4): 216–224.
Behrens, T., and D. Suarez. 2021. Participatory Practices and Grantmaking: A Landscape Analysis. Seattle: University of Washington Evans School of Public Policy & Governance. https://evans.uw.edu/wp-content/uploads/2021/07/Participatory-Practices-and-Grantmaking-Report-June-2021.pdf.
Chakma, T., et al. 2024. “Expanding Our Understanding of Evidence for Meaningful Participation.” Issue Lab. July 1, 2024. https://participatorygrantmaking.issuelab.org/resource/expanding-our-understanding-of-evidence-for-meaningful-participation.html.
Disability Rights Fund. 2023. “Impact.” Disability Rights Fund. https://www.drafund.org/impact/.
EDGE Funders Alliance. 2018. “What Does Participatory Grantmaking Look Like in Practice?” YouTube video. https://www.youtube.com/watch?v=0__0e01cTGs.
Farfan, C. 2022. “Evaluating What Matters: Capturing Meaningful Outcomes in Participatory Grantmaking Processes.” Fund for Shared Insight. https://fundforsharedinsight.org/viewpoint/evaluating-what-matters-capturing-meaningful-outcomes-in-participatory-grantmaking-processes/.
Ford Foundation. 2019. Request for Proposals: Building the Evidence for Participatory Grantmaking. PACE/Ford Foundation. https://www.pacefunders.org/wp-content/uploads/2019/06/Ford-RFP.pdf.
Gibson, C. M. 2018. Deciding Together: Shifting Power and Resources Through Participatory Grantmaking. New York: GrantCraft. https://www.issuelab.org/resources/32988/32988.pdf.
Gill, J. 2019. “Measuring the Impact of Participatory Grantmaking.” NPC Blog. https://www.thinknpc.org/blog/measuring-the-impact-of-participatory-grantmaking/.
González, R. 2019. The Spectrum of Community Engagement to Ownership. Facilitating Power.
Love, K. 2022. “What Impact Can Participatory Philanthropy Have? Ask the Participants.” Fund for Shared Insight. https://fundforsharedinsight.org/viewpoint/what-impact-can-participatory-philanthropy-have-ask-the-participants/.
ORS Impact. 2023. “Participatory Climate Initiative: Learnings and Reflections from Gatherings.” Fund for Shared Insight. October 2023. https://fundforsharedinsight.org/wp-content/uploads/2023/10/Participatory-Climate-Initiative-Learnings-and-Reflections-from-Gatherings-101223.pdf.
Paterson, H. 2021. “Evaluating Participatory Grantmaking.” Medium, October 18, 2021. https://hannah-paterson.medium.com/evaluating-participatory-grantmaking-9bffba0e6797.
Patton, M. Q. 2018. Principles-Focused Evaluation: The GUIDE. New York: Guilford Press. https://www.guilford.com/books/Principles-Focused-Evaluation/Michael-Quinn-Patton/9781462531820/contents.
Stachowiak, S. 2023. “Some Lessons from Participatory Grantmaking and Meditations on Power for the Field.” Fund for Shared Insight. https://fundforsharedinsight.org/evaluation/some-lessons-from-participatory-grantmaking-and-meditations-on-power-for-the-field/.
UpMetrics. 2023. “What Is Participatory Grantmaking? A Guide to Centering Community Voices.” UpMetrics Blog. https://blog.upmetrics.com/participatory-grantmaking#success.