Empathy in Evaluation: A Meta-Analysis Comparing Evaluation Models in Refugee and Displaced Settings
By Dr. Fatima T. Zahra
Hello, my name is Fatima T. Zahra. I am an Assistant Professor of Evaluation, Statistics, and Methodology at the University of Tennessee, Knoxville. My research examines the intersection of human development, AI, and evaluation in diverse and displaced populations. Over the past decade, I have worked on projects that explore the role of evaluation in shaping educational and labor market outcomes in refugee and crisis-affected settings. This post departs from a purely technical discussion to reflect on the role of empathy in evaluation practices—a quality that is often overlooked but profoundly consequential. For more information about my work, visit my website.
Evaluation is typically regarded as an instrument for assessing program effectiveness. However, in marginalized and forcibly displaced populations, conventional evaluation models often fall short. Traditional frameworks prioritize objectivity, standardized indicators, and externally driven methodologies, yet they frequently fail to capture the complexity of lived experiences. This gap has spurred the adoption of empathy-driven approaches to evaluation, particularly participatory and culturally responsive frameworks that prioritize community voices, local knowledge, and equitable power-sharing in the evaluation process. Yet work in this area remains substantially underdeveloped.

Why Does This Matter?
My recent meta-analysis of 40 studies comparing participatory, culturally responsive, and traditional evaluation models in refugee and displaced settings underscores the importance of empathy-driven approaches. Key findings include:
- Participatory evaluations demonstrated high levels of community engagement, with attendance and participation rates ranging from 71% to 78%. Evaluations that positioned community members as co-researchers led to greater program sustainability.
- Culturally responsive evaluations yielded statistically significant improvements in mental health outcomes and knowledge acquisition, particularly when interventions incorporated linguistic and cultural adaptations tailored to participants’ lived experiences.
- Traditional evaluations exhibited mixed results, proving effective in measuring clinical outcomes but demonstrating lower engagement (54% average participation rate), particularly in cases where community voices were not integrated into the evaluation design.
The sustainability of programs was not dictated by evaluation models alone but was strongly influenced by community ownership, capacity building, and system integration. Evaluations that actively engaged community members in decision-making processes were more likely to foster lasting impact.
Lessons from the Field
In our research on early childhood development among Rohingya refugees in Bangladesh, initial evaluations of play-based learning programs suggested minimal paternal engagement. However, when we restructured our approach to include fathers in defining meaningful participation—through focus groups and storytelling sessions—engagement increased dramatically. This shift underscored a critical lesson: evaluation frameworks that do not reflect the lived realities of marginalized communities risk missing key drivers of success.
Similarly, in a study examining the impact of employment programs in refugee camps, traditional evaluations focused primarily on income and productivity, overlooking the psychological and social effects of work. By incorporating mental well-being as a key evaluation metric—through self-reported dignity, purpose, and social belonging—we found that employment offered far more than economic stability. These findings reinforce an essential principle: sustainable impact is most likely when evaluation is conducted with communities rather than on them, recognizing the full spectrum of human needs beyond economic indicators.
Rethinking Evaluation: A Call for Change
To advance the field of evaluation, particularly in marginalized and displaced settings, we must adopt new approaches:
- Power-sharing as a foundational principle. Evaluation must shift from an extractive process to a collaborative one. This means prioritizing genuine co-creation, where communities influence decisions from research design to data interpretation.
- Cultural responsiveness as a necessity, not an afterthought. Effective evaluation requires deep listening, linguistic adaptation, and recognition of cultural epistemologies. Without this, findings may be incomplete or misinterpreted.
- Expanding our definition of rigor. Methodological validity should not come at the expense of community relevance. The most robust evaluations integrate standardized measures with locally grounded insights.
- Moving beyond extractive evaluation models. The purpose of evaluation should extend beyond measuring impact to strengthening local capacity for continued assessment and programmatic refinement.
Looking Ahead
The field of evaluation stands at a pivotal juncture. Traditional approaches, which often prioritize external expertise over local knowledge, are proving inadequate in addressing the complexity of crisis-affected populations. Empathy in evaluation (EIE) methodologies—those that emphasize cultural adaptation, power-sharing, and stakeholder engagement—offer a path toward more just, effective, and sustainable evaluation practice.
For scholars, this shift necessitates expanding research on context-sensitive methodologies. For practitioners, it demands a reimagining of evaluation as a process that centers mutual learning rather than imposing external standards. For policymakers and funders, it calls for investment in evaluation models that are adaptive, participatory, and aligned with the needs of affected populations.
As evaluators, we hold a critical responsibility. We can either reinforce existing power imbalances or work to build evaluation frameworks that respect and reflect the realities of the communities we serve. If we aspire to generate meaningful knowledge and drive lasting change, we must place empathy, cultural responsiveness, and community engagement at the core of our methodologies.
Additional Resources
For those interested in deepening their understanding of these concepts, I highly recommend the following works:
- Evaluation in Humanitarian Contexts:
- Mertens, D. M. (2009). Transformative Research and Evaluation. Guilford Press.
- Culturally Responsive Evaluation:
- Hood, S., Hopson, R. K., & Kirkhart, K. E. (2015). Culturally responsive evaluation. In K. E. Newcomer, H. P. Hatry, & J. S. Wholey (Eds.), Handbook of Practical Program Evaluation (4th ed., pp. 281–317). Jossey-Bass. https://doi.org/10.1002/9781119171386.ch12
- Participatory Research in Development Settings:
- Chouinard, J. A., & Cousins, J. B. (2015). The journey from rhetoric to reality: Participatory evaluation in a development context. Educational Assessment, Evaluation and Accountability, 27, 5–39. https://doi.org/10.1007/s11092-013-9184-8
- Empathy in Evaluation:
- Zahra, F. T. (n.d.). Empathy in Evaluation. https://www.fatimazahra.org/blog-posts/Blog%20Post%20Title%20One-gygte
- Empathy and Sensitivity to Injustice:
- Decety, J., & Cowell, J. M. (2014). Empathy and motivation for justice: Cognitive empathy and concern, but not emotional empathy, predict sensitivity to injustice for others (SPI White Paper No. 135). Social and Political Intelligence Research Hub. https://web.archive.org/web/20221023104046/https://spihub.org/site/resource_files/publications/spi_wp_135_decety.pdf
Final Thought: Evaluation is more than an assessment tool; it is a mechanism for empowerment. Evaluators have the capacity to amplify community voices, shape equitable policies, and drive sustainable change. The question is not whether we can integrate empathy into our methodologies, but whether we choose to do so.