Educational Leadership and Policy Studies


Navigating Ambiguity and Asymmetry: from Undergraduate to Graduate Student and Beyond

June 15, 2025 by Jonah Hall

By Jessica Osborne, Ph.D. and Chelsea Jacobs

Jessica is the Principal Evaluation Associate for the Higher Education Portfolio at The Center for Research Evaluation at the University of Mississippi. She earned a PhD in Evaluation, Statistics, and Measurement from the University of Tennessee, Knoxville, an MFA in Creative Writing from the University of North Carolina, Greensboro, and a BA in English from Elon University. Her main areas of research and evaluation are undergraduate and graduate student success, higher education systems, needs assessments, and intrinsic motivation. She lives in Knoxville, TN with her husband, two kids, and three (yes, three…) cats. 

My name is Chelsea Jacobs, and I’m a PhD student in the Evaluation, Statistics, and Methodology (ESM) program at the University of Tennessee, Knoxville. I’m especially interested in how data and evidence are used to inform and improve learning environments. In this post, I’ll share reflections — drawn from personal experience and professional mentorship — on navigating the ambiguity and asymmetry that often define the transition from undergraduate to graduate education. I’ll also offer a few practical tips and resources for those considering or beginning this journey. 

Transitioning from undergraduate studies to graduate school is an exciting milestone, full of possibilities and challenges. For many students, it also marks a shift in how success is measured and achieved. We — Jessica Osborne, PhD, Principal Evaluation Associate at The Center for Research Evaluation at the University of Mississippi, and Chelsea Jacobs, PhD student at the University of Tennessee — have explored these topics during our professional networking and mentoring sessions. While ambiguity and asymmetry may exist in undergraduate education, they often become more pronounced and impactful in graduate school and professional life. This post sheds light on these challenges, offers practical advice, and points prospective graduate students to resources that can ease the transition. 

From Clarity to Exploration: Embracing Ambiguity in Graduate Education 

In undergraduate studies, assessments often come in the form of multiple-choice questions or structured assignments, where answers are concrete and feedback is relatively clear-cut. From a Bloom’s Taxonomy perspective, this often reflects the “remembering” domain. Success may align with effort — study hard, complete assignments, and you’ll likely earn good grades. 

Graduate school, however, introduces a level of ambiguity that can be unexpectedly challenging. Research projects, thesis writing, and professional collaborations often lack clear guidelines or definitive answers. Feedback becomes more subjective, reflecting the complexity and nuance of the work. For example, a research proposal may receive conflicting critiques from reviewers, requiring students to navigate gray areas with the support of advisors, peers, and faculty. 

These shifts are compounded by a structural difference: while undergraduates typically have access to dedicated offices and resources designed to support their success, graduate students often face these challenges with far fewer institutional supports. This makes it all the more important to cultivate self-advocacy, build informal support networks, and learn to tolerate uncertainty. 

Though ambiguity can feel overwhelming, it’s also an opportunity to develop critical thinking and problem-solving skills. Graduate school encourages asking deeper questions, exploring multiple perspectives, and embracing the process of learning rather than focusing solely on outcomes. 

How to Navigate Ambiguity 

Embrace the Learning Curve: Ambiguity is not a sign of failure but a necessary condition for growth—it pushes us beyond routine practice and encourages deeper, more flexible thinking. Seek opportunities to engage with complex problems, even if they feel overwhelming at first, as these moments often prompt the most meaningful development. 

Ask for Guidance: Don’t hesitate to seek clarification from advisors, peers, or those just a step ahead in their academic journey. Opening up about your struggles can reveal how common they are — and hearing how others have navigated doubt or setbacks can help you build the resilience to keep moving forward. Graduate school can be a collaborative space, and connection can be just as important as instruction. 

In the ESM program at UTK, we’re fortunate to be part of a collaborative, non-competitive graduate environment. This isn’t the case for all graduate programs, so it’s an important factor to consider when choosing where to study. 

Uneven Roads: Embracing the Asymmetry of Growth 

In undergraduate education, effort is often emphasized as the key to success, though the relationship between effort and outcome isn't always straightforward. Study strategies, access to resources, prior preparation, and support systems all play a role — meaning that even significant effort doesn't always lead to the expected results. Still, at the undergraduate level success often does track effort: study hard, complete assignments, and you'll likely earn good grades.

In graduate school and professional life, this symmetry can break down. You might invest months into a research paper, only to have it rejected by a journal. Grant proposals, job applications, and conference submissions often yield similar results—hard work doesn’t always guarantee success, but it does guarantee learning. 

This asymmetry can be disheartening, but it mirrors the realities of many professional fields. Learning to navigate it is crucial for building resilience and maintaining motivation. Rejection and setbacks are not personal failures but part of growth. 

How to Handle Asymmetry 

Redefine Success: Focus on the process rather than the outcome. Every rejection is an opportunity to refine your skills and approach. 

Build Resilience: Mistakes, failures, and rejection are not just normal—they’re powerful learning moments. These experiences often reveal knowledge or skill gaps more clearly than success, making them both memorable and transformative. Cultivating a growth mindset helps reframe setbacks as essential steps in your development. 

Seek Support: Surround yourself with a network of peers, mentors, and advisors who can offer perspective and encouragement. 

Resources for Prospective Graduate Students 

Workshops and seminars can help students build essential skills — offering guidance on research methodologies, academic writing, and mental resilience. 

Here are a few resources to consider: 

  • Books: Writing Your Journal Article in Twelve Weeks by Wendy Laura Belcher is excellent for developing academic writing. The Writing Workshop, recommended by a University of Michigan colleague, is a free, open-access resource. 
  • Research Colloquium: UTK students apply research skills in a colloquium setting. See Michigan State University’s Graduate Research Colloquium for a similar example. These events are common — look into what your institution offers. 
  • Campus Resources: Don’t overlook writing centers, counseling centers, and mental health services. For example, Harvard’s Counseling and Mental Health Services provides a strong model. Explore what’s available at your school. 
  • Professional Networks: Join organizations or online communities in your field. This can lead to mentorship, which is invaluable — and worthy of its own blog post. 

Final Thoughts 

Ambiguity and asymmetry are not obstacles to be feared but challenges to be embraced. They help develop the critical thinking, problem-solving, and resilience needed for both graduate school and a fulfilling professional career. By understanding these aspects and using the right resources, you can navigate the transition with confidence. 

To prospective graduate students: welcome to a journey of growth, discovery, and MADness — Meaningful, Action-Driven exploration of methods and measures. We’re excited to see how you’ll rise to the challenge. 

Filed Under: Evaluation Methodology Blog

My Journey In Writing A Bibliometric Analysis Paper

June 1, 2025 by Jonah Hall

Hello everyone! I am Richard D. Amoako, a third-year doctoral student in Evaluation, Statistics, and Methodology at the University of Tennessee, Knoxville. I recently completed a bibliometric analysis paper for my capstone project on Data Visualization and Communication in Evaluation. Bibliometrics offers a powerful way to quantify research trends, map scholarly networks, and identify gaps in the literature, making it an invaluable research method for evaluators and researchers alike.

Learning bibliometrics isn’t always straightforward. Between choosing the right database, wrangling APIs, and figuring out which R or Python packages won’t crash your laptop, there’s a steep learning curve. That’s why I’m writing this: to share the lessons, tools, and occasional frustrations I’ve picked up along the way. Whether you’re an evaluator looking to map trends in your field or a researcher venturing into bibliometrics for the first time, I hope this post saves you time, sanity, and a few coding headaches. Let’s explore the methodology, applications, and resources that shaped my project. 

Understanding Bibliometric Analysis 

Bibliometric analysis is the systematic study of academic publications through quantitative methods: examining citations, authorship patterns, and keyword frequencies to reveal research trends. It differs from traditional literature reviews by delivering data-driven insights into how knowledge evolves within a field. Common applications include identifying influential papers, mapping collaboration networks, and assessing journal impact (Donthu et al., 2021; Van Raan, 2018; Zupic & Čater, 2015).

For evaluators, this approach is particularly valuable. It helps track the adoption of evaluation frameworks, measure scholarly influence, and detect emerging themes, such as how data visualization has gained traction in recent years. My interest in bibliometrics began while reviewing literature for my capstone project. Faced with hundreds of papers, I needed a way to analyze trends objectively rather than rely on subjective selection. Bibliometric methods provide that structure, turning scattered research into actionable insights.

Key Steps in Writing a Bibliometric Paper 

Defining Research Objectives 
The foundation of any successful bibliometric study lies in crafting a precise research question. For my capstone on data visualization in evaluation literature, I focused on: “How has the application of data visualization techniques evolved in program evaluation research from 2010 to 2025?” This specificity helped me avoid irrelevant data while maintaining analytical depth. Before finalizing my question, I reviewed existing systematic reviews to identify underexplored areas, a crucial step that prevented duplication of prior work. When brainstorming and refining your thoughts, generative AI tools (ChatGPT, Claude, Perplexity, Google Gemini, Microsoft Copilot, DeepSeek, etc.) can help you sharpen and clarify your ideas.

Database Selection and Data Collection 
Choosing the right database significantly impacts study quality. After comparing options, I selected Scopus for its comprehensive coverage of social science literature and robust citation metrics. While Web of Science (WoS) offers stronger impact metrics, its limited coverage of evaluation journals made it less suitable, though I did examine its potential applications. Google Scholar's expansive but uncurated collection proved too noisy for systematic analysis. Scopus's ability to export 2,000 records at once, including metadata such as author affiliation and country, proved invaluable for my collaboration mapping.

Data Extraction and Automation 
To efficiently handle large datasets, I leveraged R's bibliometrix package. You can also use an R script to automate data extraction with the Scopus API (Application Programming Interface). APIs enable software systems to communicate with each other; researchers can use them to access database records (from Scopus, WoS, and others) automatically rather than downloading them manually. To access the Scopus API, request a key via Elsevier's Developer Portal.

Pros: Good for large-scale scraping. Cons: Requires API key approval (can take days or weeks).  
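
For readers who want a starting point, here is a minimal sketch of that kind of automated pull using the rscopus package. The query string, record cap, and placeholder API key are illustrative assumptions rather than the exact settings from my project, so check the rscopus documentation and your key's quotas before running it.

library(rscopus)
library(bibliometrix)

# Authenticate with the key issued through Elsevier's Developer Portal (placeholder below)
set_api_key("YOUR_ELSEVIER_API_KEY")

# Pull records matching the search string (illustrative query and record cap)
res <- scopus_search(
  query = 'TITLE-ABS-KEY("data visualization" AND "evaluation")',
  max_count = 2000
)

# Flatten the returned entries into a data frame for further cleaning
records <- gen_entries_to_df(res$entries)$df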

For targeted bibliometric searches, carefully construct your keyword strings using Boolean operators (AND/OR/NOT) and field tags like TITLE-ABS-KEY() to balance recall and precision – for example, my search TITLE-ABS-KEY(“data visualization” AND “evaluation”) retrieved 37% more relevant papers than a simple keyword search by excluding off-topic mentions in references. 

After exporting Scopus results to CSV, a simple script converted and analyzed the data (Aria & Cuccurullo, 2017): 

library(bibliometrix)

# Convert the exported Scopus CSV into a bibliometrix data frame
M <- convert2df("scopus.csv", dbsource = "scopus", format = "csv")

# Descriptive bibliometric summary: productivity, citations, and author statistics
results <- biblioAnalysis(M)

This approach provided immediate insights into citation patterns and author networks.  

Data Screening and Cleaning 
The initial search may return many papers; my search returned over 2,000. To narrow down to the most relevant articles, you can apply filters such as: 

  1. Removing duplicates via DOI matching (in R: M <- M[!duplicated(M$DI), ] # remove by DOI); duplicates are common in multi-database studies. 
  2. Excluding non-journal articles 
  3. Excluding irrelevant articles that do not match your research questions or inclusion criteria 
  4. Manual review of random samples to verify relevance 

Additional data cleaning may be required; I use R's tidyverse (including dplyr) and janitor packages for these tasks (see the sketch below).
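
Here is a minimal sketch of these screening filters in R, assuming the standard bibliometrix field tags (DI for DOI, DT for document type, PY for publication year). The thresholds and sample size are illustrative; confirm the column names with names(M) against your own export before running it.

library(dplyr)

M <- M[!duplicated(M$DI), ]                 # 1. drop duplicate records by DOI
M <- M %>% filter(DT == "ARTICLE")          # 2. keep journal articles only
M <- M %>% filter(PY >= 2010, PY <= 2025)   # 3. drop records outside the study window
set.seed(2025)
manual_check <- M %>% slice_sample(n = 20)  # 4. random sample for manual relevance review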

The screening process can be overwhelming and time-consuming if performed manually. Fortunately, several tools and websites are available to assist with this task. Notable examples include abstrackr, Covidence.org, Rayyan.ai, ASReview, Loonlens.com, and Nested Knowledge. These tools require well-defined inclusion and exclusion criteria, so it is essential to have thoroughly considered criteria in place. Among these tools, my preferred choice is Loonlens.com, which automates the screening process based on the specified criteria and generates a CSV file with decisions and reasons upon completion.

Analysis and Visualization  

Key analytical approaches included the following (refer to the appendices for the R code and this guideline): 

  • Citation analysis to identify influential works 
  • Co-authorship network mapping to reveal collaboration patterns 
  • Keyword co-occurrence analysis to track conceptual evolution 
  • Country and institution analysis to identify geographical collaborations and impacts 

For visualization, VOSviewer creates clear keyword co-occurrence maps, while CiteSpace helps identify temporal trends. The bibliometrix package streamlined these analyses, with functions like conceptualStructure() revealing important thematic connections. Visualization adjustments (like setting minimum node frequencies) transformed initial “hairball” network diagrams into clear, interpretable maps.  
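
As one illustration of taming a “hairball,” here is a sketch of a keyword co-occurrence network in bibliometrix with the node count capped; the threshold values are illustrative assumptions, not the exact settings from my project.

# Build a keyword co-occurrence network and plot only the 50 most frequent nodes
NetMatrix <- biblioNetwork(M, analysis = "co-occurrences", network = "keywords", sep = ";")
networkPlot(NetMatrix, n = 50, normalize = "association",
            Title = "Keyword Co-occurrence Network", labelsize = 0.8)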

This structured approach – from precise question formulation through iterative visualization – transformed a potentially overwhelming project into manageable stages. The automation and filtering strategies proved particularly valuable, saving countless hours of manual processing while ensuring analytical rigor.

All the R code I used for data cleaning, analysis, and visualization is available on my GitHub repository. 

Challenges & How to Overcome Them 

Bibliometric analysis comes with its fair share of hurdles. Early in my project, I hit a major roadblock when I discovered many key papers were behind paywalls. My solution? I leveraged my university's interlibrary loan/resource sharing system and reached out directly to authors via ResearchGate to request full texts; some responded with their papers. API limits were another frustration, particularly with Scopus's weekly request cap (20,000 publications per week). I used R's httr package to space out requests systematically, grouping queries by year or keyword to stay under the weekly limit while automating the process (a rough sketch follows below). In addition to using the API, you can access Scopus with your institutional credentials, search manually for papers using your key terms, and export the results in formats such as CSV, RIS, and BibTeX.
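
As a rough illustration of that pacing strategy, the sketch below loops over yearly queries with httr and pauses between calls. The endpoint, header, and query syntax follow Elsevier's public Scopus Search API documentation as I understand it, and the API key and pause length are placeholders, so verify everything against the current docs and your own quota.

library(httr)

years <- 2010:2025
results <- list()

for (yr in years) {
  resp <- GET(
    "https://api.elsevier.com/content/search/scopus",
    add_headers("X-ELS-APIKey" = "YOUR_API_KEY"),          # placeholder key
    query = list(query = sprintf(
      'TITLE-ABS-KEY("data visualization" AND "evaluation") AND PUBYEAR = %d', yr))
  )
  results[[as.character(yr)]] <- content(resp, as = "parsed")  # store parsed JSON per year
  Sys.sleep(5)   # pause between requests to spread the load over time
}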

The learning curve for R’s Bibliometrix package nearly derailed me in week two. After spending hours on error messages, I discovered the package’s excellent documentation and worked through their tutorial examples line by line. This hands-on approach helped me master essential functions within a week. 

Perhaps the trickiest challenge was avoiding overinterpretation. My initial excitement at seeing strong keyword clusters nearly led me to make unsupported claims. Consult with your advisor, a colleague, or an expert in your field to help you distinguish between meaningful patterns and statistical noise. For instance, I found that a seemingly important keyword connection was simply due to one prolific author's preferred terminology.

For clarity, I used a consistent color scheme across visualizations to help readers quickly identify key themes: blue for methodological terms, green for application areas, and red for emerging concepts. This small touch markedly improved my visuals' readability.

Conclusion 

This journey through bibliometric analysis has transformed how I approach research. From crafting precise questions to interpreting network visualizations, these methods bring clarity to complex literature landscapes. The technical hurdles are real but manageable – the payoff in insights is worth the effort. 

For those just starting, I recommend beginning with a small pilot study, perhaps analyzing 100-200 papers on a focused topic. The skills build quickly. 

I’d love to hear about your experiences with bibliometrics or help troubleshoot any challenges you encounter. Feel free to reach out at contact@rd-amoako.com or continue the conversation on research forums and other online platforms. Let’s explore how these methods can advance our evaluation and research  practice together. 

Interested in seeing the results of my bibliometric analysis and exploring the key findings? Connect with me via LinkedIn  or my blog. 

View an interactive map of publication counts by country from my project:  publications_map.html  

Bibliography 

Van Eck, N. J., & Waltman, L. (2014). Visualizing bibliometric networks. In Y. Ding, R. Rousseau, & D. Wolfram (Eds.), Measuring scholarly impact: Methods and practice (pp. 285–320). Springer.

Aria, M., & Cuccurullo, C. (2017). bibliometrix: An R-tool for comprehensive science mapping analysis. Journal of Informetrics, 11(4), 959–975.

Donthu, N., Kumar, S., Mukherjee, D., Pandey, N., & Lim, W. M. (2021). How to conduct a bibliometric analysis: An overview and guidelines. Journal of Business Research, 133, 285–296. https://doi.org/10.1016/j.jbusres.2021.04.070 

Liu, A., Urquía-Grande, E., López-Sánchez, P., & Rodríguez-López, Á. (2023). Research into microfinance and ICTs: A bibliometric analysis. Evaluation and Program Planning, 97, 102215. https://doi.org/10.1016/j.evalprogplan.2022.102215 

Van Raan, A. F. J. (2018). Measuring science: Basic principles and application of advanced bibliometrics. In W. Glänzel, H. F. Moed, U. Schmoch, & M. Thelwall (Eds.), Handbook of science and technology indicators. Springer. 

Waltman, L., Calero-Medina, C., Kosten, J., Noyons, E. C. M., Tijssen, R. J. W., Van Eck, N. J., & Wouters, P. (2012). The Leiden Ranking 2011/2012: Data collection, indicators, and interpretation. Journal of the American Society for Information Science and Technology, 63(12), 2419–2432. https://doi.org/10.1002/asi.22708 

Yao, S., Tang, Y., Yi, C., & Xiao, Y. (2022). Research hotspots and trend exploration on the clinical translational outcome of simulation-based medical education: A 10-year scientific bibliometric analysis from 2011 to 2021. Frontiers in Medicine, 8, 801277. https://doi.org/10.3389/fmed.2021.801277 

Zupic, I., & Čater, T. (2015). Bibliometric methods in management and organization. Organizational Research Methods, 18(3), 429–472. https://doi.org/10.1177/1094428114562629

 Resources: 

  • Bibliometrix Tutorial 
  • Scopus API Guide 
  • VOSviewer 
  • CiteSpace Manual  

Data Screening  

abstrackr: https://www.youtube.com/watch?v=jy9NJsODtT8 

Covidence.org: https://www.youtube.com/watch?v=tPGuwoh834A 

Rayyan.ai: https://www.youtube.com/watch?v=YFfzH4P6YKw&t=9s 

ASReview: https://www.youtube.com/watch?v=gBmDJ1pdPR0 

Nested Knowledge: https://www.youtube.com/watch?v=7xih-5awJuM 

R resources:  

My project repository https://github.com/amoakor/BibliometricAnalysis.git 

Packages: 

tidyverse, bibliometrix, rscopus, janitor, psych, tm 

httr package documentation: https://httr.r-lib.org/, https://github.com/r-lib/httr 

Analyzing & Visualizing Data 

  • Key Metrics to Explore (see the Bibliometrix Tutorial for more examples): 
  1. Citation Analysis: 

citations <- citations(M, field = "article", sep = ";") 

head(citations$Cited, 10) # Top 10 most cited 

  2. Co-authorship Networks: 

NetMatrix <- biblioNetwork(M, analysis = "collaboration", network = "authors", sep = ";") 

networkPlot(NetMatrix, normalize = "salton", Title = "Author collaboration network") 

  3. Keyword Trends: 

conceptualStructure(M, field = "ID", method = "CA", minDegree = 10) 

Filed Under: Evaluation Methodology Blog

Power BI, Will It Really Give Me Data Viz Superpowers?

May 15, 2025 by Jonah Hall

What is Power BI?

Power BI is a powerful tool to visualize data.  

It can take multiple large datasets, put them all together, transform them, perform calculations, and help you create beautiful visualizations. Think of it as a data wrangler, organizer, and visualizer! Oftentimes, a collection of visualizations is assembled into a report.

My name is Jake Working. I am a third-year student in the ESM PhD program at UTK, and I primarily use Power BI in my day job as a Data Analyst for Digital Learning at UTK. I will briefly discuss some of Power BI's main functions and point you toward some resources if you want to learn more.

Why use a data viz software? 

Before we jump into the software, you may be thinking, “why go through all the trouble of learning another software just to create visualizations? Aren’t my [insert your software of choice here] visualizations good enough?” 

Even when you get comfortable and quick in [your software of choice], at the end of the day these programs' primary functions are typically to store, present, or analyze your data, not to bring data in for the purpose of creating visualizations.

The advantage of learning data visualization software like Power BI is that it is designed with visualization as its primary purpose. If you have learned or even mastered creating visuals in another software, you can 100% learn and master visualization software like Power BI. 

What can Power BI do? 

First, Power BI is excellent at bringing in data. You can connect multiple large and different types of data sources to Power BI, transform them, and perform calculations as necessary to prepare visuals. 

For data sources, if you can access the data, Power BI can connect to or import it. Power BI can take flat files (e.g., Excel, PDF, or CSV), pull directly (snapshot or live) from a database (e.g., MySQL, Oracle, SQL Server), import from a website, an R script, a Python script, and many more! Even if you have multiple data sources, you can load in as many as you need and create relationships between them.

Creating relationships serves as the backbone of your data model if you have multiple data sources. For example, say you have a data source with student demographic data and another with student course information. If both contain a unique identifier, such as their student ID, you can create a relationship between the data sources based on that student ID and Power BI will know which course information connects with which student in your demographic data.  

Most mistakes in building a model occur at this step, and it is important to understand how and why you are building your model in a certain way, or else you could end up with sluggish, incorrect, or confusing output. I suggest reading Microsoft's overview of relationships and then, later, this two-part blog post on Power BI data modeling best practices (part 1, part 2). Warning! The blog post is overly detailed for beginners, but it covers extremely important information for avoiding common Power BI pitfalls with relationships. I have had to deal with, and overcome, issues related to cardinality, filtering, and schema structure that are discussed in the blog.

An overview of Power BI’s capabilities: bringing in multiple sources of data, cleaning data, creating relationships between data sources, and using the data to generate a visual report. 

Once you have identified your dataset, Power BI can transform your data into clean, workable data within its Power Query editor. This editor offers functionality similar to Excel, such as updating data types, replacing values, creating new columns, and pivoting data, and you can work through either the Power Query GUI or its scripting language, M. These transformation steps can be "saved" to your data source and performed on your data each time Power BI connects to or refreshes that source. So, once you have cleaned up your data, the cleaning happens automatically using the steps you already created!

Power BI can then perform complex calculations on your dataset once you've loaded it in. It uses a function and reference library called Data Analysis Expressions (DAX, for short) that is similar to the expressions used in Excel. Check out Microsoft's overview of how DAX can be used within Power BI and the library of DAX functions. In my own use of Power BI, I mainly rely on calculated columns and measures.

For example, let’s say I have a column in my data set that shows the date a form was submitted in this format: mm/dd/yyyy hr:min:sec. If I want to count the number of forms submitted in the calendar year 2025 and display that value on my report, I can create a measure using the DAX functions. It would look something like this: 
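
As a rough sketch (the table name Forms and the column SubmittedDate are placeholders for whatever your own data model uses), the measure might look something like this:

Forms Submitted 2025 = 
CALCULATE(
    COUNTROWS(Forms),                                  -- count the rows of the Forms table
    FILTER(Forms, YEAR(Forms[SubmittedDate]) = 2025)   -- keep only forms submitted in 2025
)

Dropping a measure like this onto a card visual would then display the 2025 count on the report page.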

Finally, Power BI's main function is to create engaging visuals and reports that help you draw insights from your data. Power BI has a workspace that allows you to easily select visuals, drag fields from your data into those visuals, and then edit or customize them. The software is pre-loaded with many useful visuals, but you can also search for and download additional, user-created visuals. Check out the image below showcasing Power BI's workspace.

image from Microsoft (source) 

Visuals can be used together (like in the image) to create a report. These reports can be published in a shareable environment through the Power BI Service so others can view the report. This is how companies create and distribute data reports! 

One exciting feature of Power BI is the ability to use and interact with Microsoft’s AI, Copilot. Copilot is quite intelligent when it comes to understanding and using data and can even help build visuals and whole reports. Check out this three minute demo on Copilot within Power BI to get a sense of its capabilities. 

I want to try! 

If you are interested in poking around Power BI to see if it could be useful for you, you can download the desktop version for free here. I will note that even if you are working on personal projects and have data you want to create visuals from, it may be worth it to try Power BI! 

Microsoft has training, videos, sample data you can play with once you open the program, and a community forum to help with any questions you may have.  

Curious what Power BI can do? Check out some of the submissions from this year’s Microsoft’s Power BI Visualization World Championships! 

Filed Under: Evaluation Methodology Blog

Empathy in Evaluation: A Meta-Analysis Comparing Evaluation Models in Refugee and Displaced Settings

March 15, 2025 by Jonah Hall

By Dr. Fatima T. Zahra

Hello, my name is Fatima T. Zahra. I am an Assistant Professor of Evaluation, Statistics, and Methodology at the University of Tennessee, Knoxville. My research examines the intersection of human development, AI, and evaluation in diverse and displaced populations. Over the past decade, I have worked on projects that explore the role of evaluation in shaping educational and labor market outcomes in refugee and crisis-affected settings. This post departs from a purely technical discussion to reflect on the role of empathy in evaluation practices—a quality that is often overlooked but profoundly consequential. For more information about the work that I do, check out my website.

Evaluation is typically regarded as an instrument for assessing program effectiveness. However, in marginalized and forcibly displaced populations, conventional evaluation models often fall short. Traditional frameworks prioritize objectivity, standardized indicators, and externally driven methodologies, yet they frequently fail to capture the complexity of lived experiences. This gap has spurred the adoption of empathy in evaluation, particularly participatory and culturally responsive frameworks that prioritize community voices, local knowledge, and equitable power-sharing in the evaluation process. The work in this area is substantially underdeveloped. 

A group selfie taken during field work in the Rohingya refugee camps in 2019.

Why Does This Matter?

My recent meta-analysis of 40 studies comparing participatory, culturally responsive, and traditional evaluation models in refugee and displaced settings underscores the importance of empathy-driven approaches. Key findings include: 

  • Participatory evaluations demonstrated high levels of community engagement, with attendance and participation rates ranging from 71% to 78%. Evaluations that positioned community members as co-researchers led to greater program sustainability. 
  • Culturally responsive evaluations yielded statistically significant improvements in mental health outcomes and knowledge acquisition, particularly when interventions incorporated linguistic and cultural adaptations tailored to participants’ lived experiences. 
  • Traditional evaluations exhibited mixed results, proving effective in measuring clinical outcomes but demonstrating lower engagement (54% average participation rate), particularly in cases where community voices were not integrated into the evaluation design. 

The sustainability of programs was not dictated by evaluation models alone but was strongly influenced by community ownership, capacity building, and system integration. Evaluations that actively engaged community members in decision-making processes were more likely to foster lasting impact. 

Lessons from the Field

In our research on early childhood development among Rohingya refugees in Bangladesh, initial evaluations of play-based learning programs suggested minimal paternal engagement. However, when we restructured our approach to include fathers in defining meaningful participation—through focus groups and storytelling sessions—engagement increased dramatically. This shift underscored a critical lesson: evaluation frameworks that do not reflect the lived realities of marginalized communities risk missing key drivers of success. 

Similarly, in a study examining the impact of employment programs in refugee camps, traditional evaluations focused primarily on income and productivity, overlooking the psychological and social effects of work. By incorporating mental well-being as a key evaluation metric—through self-reported dignity, purpose, and social belonging—we found that employment offered far more than economic stability. These findings reinforce an essential principle: sustainable impact is most likely when evaluation is conducted with communities rather than on them, recognizing the full spectrum of human needs beyond economic indicators. 

Rethinking Evaluation: A Call for Change

To advance the field of evaluation, particularly in marginalized and displaced settings, we must adopt new approaches: 

  1. Power-sharing as a foundational principle. Evaluation must shift from an extractive process to a collaborative one. This means prioritizing genuine co-creation, where communities influence decisions from research design to data interpretation. 
  2. Cultural responsiveness as a necessity, not an afterthought. Effective evaluation requires deep listening, linguistic adaptation, and recognition of cultural epistemologies. Without this, findings may be incomplete or misinterpreted. 
  3. Expanding our definition of rigor. Methodological validity should not come at the expense of community relevance. The most robust evaluations integrate standardized measures with locally grounded insights. 
  4. Moving beyond extractive evaluation models. The purpose of evaluation should extend beyond measuring impact to strengthening local capacity for continued assessment and programmatic refinement. 

Looking Ahead

The field of evaluation stands at a pivotal juncture. Traditional approaches, which often prioritize external expertise over local knowledge, are proving inadequate in addressing the complexity of crisis-affected populations. Empathy in evaluation (EIE) methodologies—those that emphasize cultural adaptation, power-sharing, and stakeholder engagement—offer a path toward more just, effective, and sustainable evaluation practice. 

For scholars, this shift necessitates expanding research on context-sensitive methodologies. For practitioners, it demands a reimagining of evaluation as a process that centers mutual learning rather than imposing external standards. For policymakers and funders, it calls for investment in evaluation models that are adaptive, participatory, and aligned with the needs of affected populations. 

As evaluators, we hold a critical responsibility. We can either reinforce existing power imbalances or work to build evaluation frameworks that respect and reflect the realities of the communities we serve. If we aspire to generate meaningful knowledge and drive lasting change, we must place empathy, cultural responsiveness, and community engagement at the core of our methodologies.

Additional Resources

For those interested in deepening their understanding of these concepts, I highly recommend the following works: 

  • Evaluation in Humanitarian Contexts:  
  • Mertens, D. M. (2009). Transformative Research and Evaluation. Guilford Press. 
  • Culturally Responsive Evaluation:  
  • Hood, S., Hopson, R. K., & Kirkhart, K. E. (2015). Culturally Responsive Evaluation. In K. E. Newcomer, H. P. Hatry, & J. S. Wholey (Eds.), Handbook of Practical Program Evaluation (4 ed., pp. 281-317). Jossey-Bass. https://doi.org/10.1002/9781119171386.ch12 
  • Participatory Research in Development Settings:  
  • Chouinard, J.A., Cousins, J.B. The journey from rhetoric to reality: participatory evaluation in a development context. Educ Asse Eval Acc 27, 5–39 (2015). https://doi.org/10.1007/s11092-013-9184-8 
  • Empathy in Evaluation:  
  • Zahra, F. T. (n.d.). Empathy in Evaluation. https://www.fatimazahra.org/blog-posts/Blog%20Post%20Title%20One-gygte 
  • Empathy and Sensitivity to Injustice:  
  • Decety, J., & Cowell, J. M. (2014). Empathy and motivation for justice: Cognitive empathy and concern, but not emotional empathy, predict sensitivity to injustice for others (SPI White Paper No. 135). Social and Political Intelligence Research Hub. https://web.archive.org/web/20221023104046/https://spihub.org/site/resource_files/publications/spi_wp_135_decety.pdf 

Final Thought: Evaluation is a mechanism for empowerment and is more than just an assessment tool. Evaluators have the capacity to amplify community voices, shape equitable policies, and drive sustainable change. The question is not whether we can integrate empathy into our methodologies, but whether we choose to do so.  

Filed Under: Evaluation Methodology Blog

Irwin Recognized As Emerging Professional By ACPA

March 5, 2025 by Jonah Hall

Courtesy of the College of Education, Health, & Human Sciences

At its recent convention in Long Beach, California, College Student Educators International (ACPA) recognized Lauren Irwin with the Annuit Coeptis Emerging Professionals Award. This prestigious award honors exemplary educators in the early stages of their careers. Irwin was one of five early-career professionals recognized for their contributions to the field.

Irwin, an assistant professor in the department of Educational Leadership and Policy Studies (ELPS) in the College of Education, Health, and Human Sciences (CEHHS), is a long-time ACPA member and was deeply honored to receive the award.


“ACPA has long been my professional home in student affairs, and it means a lot to receive this recognition,” said Irwin. “The Annuit Coeptis award is ultimately about community and discussion to support the future of our field. As a former student affairs administrator and early-career faculty member, I am honored to be part of this prestigious multigenerational community and to have the opportunity to learn from and with some of the brightest minds in our field.”

Irwin primarily teaches in the College Student Personnel and Higher Education Administration programs. Her research informs student affairs practice, aiming to enhance and affirm the success of both students and practitioners. Her doctoral dissertation, which examined racialization and whiteness in college student leadership programs, earned ACPA’s Marylu McEwen Dissertation of the Year Award. Additionally, her research has been published in numerous scholarly journals.

“I hope to continue centering my commitment to student learning, equity, and inclusion through my teaching, research, and service,” Irwin said.

Through its seven departments and 13 centers, the UT College of Education, Health, and Human Sciences enhances the quality of life for all through research, outreach, and practice. Find out more at cehhs.utk.edu.

Filed Under: News

Is Your Data Dirty? The Importance of Conducting Frequencies First

March 1, 2025 by Jonah Hall

By Jennifer Ann Morrow, Ph.D.

Data, like life, can be messy. I’ve worked with all types of data, both collected by me and by my clients, for over 25 years and I ALWAYS check my data before conducting my proposed analyses. Sometimes, this part of the analysis process is quick and easy but most of the time it’s like an investigation…you need to be thorough, take your time, and provide evidence for your decision making. 

Data Cleaning Step 3: Perform Initial Frequencies 

After you have drafted your codebook and analysis plan, you should conduct frequencies on all of the variables in your dataset, both numeric and string. I typically use Excel or SPSS to do this (my colleague Dr. Louis Rocconi prefers R), but you can use any statistical software that you feel most comfortable with. At this step I conduct frequencies and request graphics (e.g., bar chart, histogram) for every variable. This output will be invaluable as you work through your next data cleaning steps.
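
If, like Dr. Rocconi, you prefer R, a quick base-R sketch of this step might look like the following; the data frame name survey_data is a placeholder for your own dataset.

# Frequency table (including missing values) and a quick chart for every variable
for (v in names(survey_data)) {
  cat("\n==", v, "==\n")
  freq <- table(survey_data[[v]], useNA = "ifany")
  print(freq)
  barplot(freq, main = v)   # for continuous variables, hist(survey_data[[v]]) may read better
}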

So, what should you be looking at when reviewing your frequencies? One thing I make note of is any discrepancy in coding between my data and what is listed in my codebook. I'll flag any spelling issues in my variable names/labels and note anything that doesn't match my codebook. I also always check that my value labels (the labels given to my numeric categories) match my codebook and are consistent across sets of variables. Many times, if you are using an online survey software package to collect your data, programming mistakes made when creating the survey can result in mislabeled values. Also, if many individuals have entered data into your database, the chances of mistakes during the data entry process increase. During this step I also check that I have properly labeled any values I'm using to designate missing data and that this is consistent with what I have listed in my codebook.

Lastly, I will highlight when I see variables that may have extreme scores (i.e., potential outliers), variables with more than 5% missing data, and variables with very low sample size in any of their response categories. I’ll use this output in future data cleaning steps to aid in my decision making on variable modification. 

Data Cleaning Step 4: Check for Coding Mistakes 

At this step I take the output where I highlighted potential coding issues and start reviewing and making variable modification decisions. Coding issues are more common when your data have been manually entered, but you can still have coding errors in online data collection! For any variable with coding issues, I first determine whether I can verify the data from the original or another source. For data that were manually entered, I'll go back to the organization, paper survey, or data form to verify the value. If it needs to be changed to the correct response, I make a note of this to fix in my next data cleaning step. If I cannot verify the datapoint (as when data were collected anonymously) and the value doesn't fall within the possible values listed in my codebook, then I make a note to set the value to missing when I get to the next data cleaning step.
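
For those working in R, here is a hypothetical sketch of what that eventual "set it to missing" fix might look like, using a 1-5 Likert item called satisfaction in a placeholder data frame survey_data.

# Any value outside the 1-5 range documented in the codebook becomes missing
survey_data$satisfaction[!survey_data$satisfaction %in% 1:5] <- NA
table(survey_data$satisfaction, useNA = "ifany")   # re-run the frequencies to confirm the change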

Additional Advice 

As I am going through my frequencies I will highlight/enter notes directly in the output to make things easier as I move forward through the data cleaning process. I’ll also put notes in my project notebook summarizing any issues and then once I make decisions on variable modifications, I note these in my notebook as well. You will use the output from Step 3 in the next few data cleaning steps to aid in your decision making so keep it handy! 

Resources

12 Steps of Data Cleaning Handout: https://www.dropbox.com/scl/fi/x2bf2t0q134p0cx4kvej0/TWELVE-STEPS-OF-DATA-CLEANING-BRIEF-HANDOUT-MORROW-2017.pdf?rlkey=lfrllz3zya83qzeny6ubwzvjj&dl=0 

Step 1: https://cehhs.utk.edu/elps/organizing-your-evaluation-data-the-importance-of-having-a-comprehensive-data-codebook/ 

Step 2: https://cehhs.utk.edu/elps/clean-correlate-and-compare-the-importance-of-having-a-data-analysis-plan/ 

https://davenport.libguides.com/data275/spss-tutorial/cleaning

https://libguides.library.kent.edu/SPSS/FrequenciesCategorical

https://www.datacamp.com/tutorial/tutorial-data-cleaning-tutorial

https://www.geeksforgeeks.org/frequency-table-in-r

https://www.goskills.com/Excel/Resources/FREQUENCY-Excel

Filed Under: Evaluation Methodology Blog

Boyd Receives Legacy of Excellence Award From ASCA

February 27, 2025 by Jonah Hall

Karen D. Boyd, professor of practice in the College of Education, Health, and Human Sciences (CEHHS) at the University of Tennessee, Knoxville, received the Raymond H. Goldstone Legacy of Excellence Award from the Association for Student Conduct Administration (ASCA) during its 2025 Annual Conference, held in Portland, Oregon.

The Goldstone Legacy of Excellence Award is a new initiative launched by the Goldstone Foundation to recognize distinguished individuals who have shaped the field of student conduct and higher education. The Legacy of Excellence Award annually recognizes a select group of individuals who have left an enduring impact on the profession through significant contributions to the field of student conduct; impactful scholarship and research; and/or leadership within ASCA and other organizations.

Boyd has been a part of ASCA since its inception. Her leadership included Conference Chair, President, and Gehring Academy Chair, as well as authoring multiple publications and presentations and even serving as Interim Executive Director. In addition, Boyd serves as a professor of practice and director of undergraduate education in the department of Educational Leadership and Policy Studies (ELPS).

“It is an honor to be so recognized for doing work in service to the success of my students and colleagues that I have loved so very much,” said Boyd.

Many members, past and present, have benefited from all she implemented in the Association. The future of our field continues to benefit through her role as professor at the University of Tennessee, Knoxville, where her courses are consistently regarded by students as among their favorite and most impactful.

Her work with educating professionals and students about the landmark Dixon v. Alabama case, and her partnership on the documentary regarding the case, has made a significant impact on the conduct field.

The ASCA Annual Conference, spanning from February 5 – February 8, 2025, gathered nearly 650 student conduct and student affairs practitioners for a professional development experience. The awards were presented during the Awards Luncheon on February 6, 2025, where attendees gathered to connect and congratulate the recipients.

Since its inception in 1986, the Association for Student Conduct Administration (ASCA) has been at the forefront of supporting campus judicial officers and student conduct practitioners. ASCA provides members strategic resources, including communities of practice, webinars, intensive-learning opportunities (Donald D. Gehring Academy) as well as partnering with the Raymond H. Goldstone Foundation for scholarship funding. Today, ASCA supports over 2,660 members worldwide and is committed to its mission of serving as a vital resource and advocate in the field of student conduct administration. Learn more at theasca.org.

Through its seven departments and 13 centers, the College of Education, Health, and Human Sciences enhances the quality of life for all through research, outreach, and practice. Find out more at cehhs.utk.edu

Filed Under: News

David Hamilton Recognized as Field Award Recipient

February 20, 2025 by Jonah Hall

Mr. David Hamilton, Principal at Cumberland Gap High School in the Claiborne County School District, has been named as this year’s recipient of the William J. and Lucille H. Field Award for Excellence in Secondary Principalship for the State of Tennessee.

Pictured from Left to Right: Dr. James Martinez, Mr. David Hamilton, & Mr. Randy Atkins

The Field Award was established to recognize one outstanding secondary school leader each year who demonstrates leadership excellence through commitment to the values of civility, candor, courage, social justice, responsibility, compassion, community, persistence, service, and excellence. Administered by the College of Education, Health, and Human Sciences at the University of Tennessee, the Field Award identifies a Tennessee secondary school principal whose life and work are characterized by leadership excellence and encourages secondary school principals to pause and reflect upon their current leadership practice and to consider their experience, challenges, and opportunities in light of the personal values that they embody. 

The Field Award recipient for this year is Mr. David Hamilton, Principal at Cumberland Gap High School (CGHS) in the Claiborne County School District. Mr. Hamilton has served as the principal of CGHS since 2019, and served as the school’s assistant principal from 2003-2018. During that time, he developed and implemented a program that significantly improved student transition and retention, organized initiatives that paired students and community mentors, spearheaded fundraising efforts that raised over $20,000 for student resources and facility upgrades, and established a year-round food and hygiene pantry that ensures students have access to essential resources.

Mr. Hamilton served as a high school health and physical education teacher in the Claiborne County School District from 1999 to 2003 and coached high school baseball from 2003 to 2006 and again from 2015 to 2018. He holds a Bachelor of Science degree in Health and Physical Education, as well as Master of Arts and Educational Specialist degrees in Educational Administration and Supervision, all from Lincoln Memorial University. The department of Educational Leadership and Policy Studies at the University of Tennessee, Knoxville is proud to name Mr. David Hamilton as this year's Field Award winner. Congratulations, Mr. Hamilton! 

Filed Under: News

Giving Yourself Room to Grow is Critical to Long-Term Wellbeing, and In Turn, Success

February 15, 2025 by Jonah Hall

By M. Andrew Young

We’ve all heard (and likely said) “Nobody’s perfect!”, but do we really know how to give ourselves (and others) the proper amount of empathy? 

Hello, my name is M. Andrew Young. I'm a third-year Ph.D. student in the Evaluation, Statistics and Methodology program in the Educational Leadership & Policy Studies department at the University of Tennessee. For the past five years, I have served as a higher education evaluator in my role as a Director of Assessment. In every job I've had since completing my undergraduate degree in 2011, I have woven the use of data into the fabric of my work, and this degree program and the field of evaluation are my happy place. I'd like to divert from the 'normal' type of technical blog posts I've written in the past and share something a bit more personal.

I've noticed that in higher education, particularly in graduate and professional programs, there are a lot of highly conscientious people. I am one of them. This anecdotal observation, or generalization, extends to faculty, staff, and students alike. A year ago, I was doing some research on the changing landscape of evaluation and assessment career skills, and when I looked at how much the landscape has changed post-pandemic, I was astounded by how rapidly the culture, values, and demands in the workplace had shifted (see this resource in my reference section for more info, even though it is already becoming outdated: Essential Post-Pandemic Skills | ACCA Global, 2021).

The laws of physics demand that for every action there is an equal and opposite reaction, and I have noticed that oftentimes, being conscientious, which is a good thing, is counterbalanced by its less-useful companion: high levels of self-imposed demands for excellence or even perfection. In 2021, Forbes magazine released an article called “Why Failure is Essential to Success” (Arruda, 2021). It is a really good read, and their interview with Dr. Sam Collins was eye-opening. The basic premise is that our culture celebrates and glorifies success; we even idolize overcoming adversity success stories, but we rarely see the numerous and deep failures those success stories encountered along their road to success. We love victory, but do not fully feel the depths of the pain, depression even, or discouragement they waded through along the journey.  

People like me are often so concerned with getting it right the first time, and set a personal standard so high, that when we can't attain it, we immediately sink into an unproductive self-deprecating, self-condemnatory internal dialogue. Doubts gnaw at our own sense of our worth and capability to succeed, and there is an insidious voice telling us to give up, that we aren't capable of succeeding, that we are alone or unique in our struggles, and that the effort we put in will amount to nothing more than wasted time, time we could spare ourselves by simply being satisfied with the status quo.

It is incredible how we can grow without even noticing it in the moment. Let me tell you about Andrew 10 years ago. Andrew worked for a web design and marketing consulting company. The hours were long, the pay was abhorrently low for the job title I had, and I was unhappy and out of my element. The original job I was hired to do was create data visualizations for marketing surveys. It morphed into learning survey instrument development, data cleaning, statistical analysis, search engine marketing, search engine optimization, and website quality assurance. I was not ready for the work because I was not properly trained nor supported by professional development for what I would encounter. I made a LOT of mistakes, and I was unhappy. I recall a conversation with my then supervisor. It was one of those uncomfortable conversations where my work quality didn’t measure up to the demands of the job or their expectations. We were speaking about data visualization, and they gave me a scenario of a creative way to visualize geographical map information. Something was said along the lines of, “This is the type of stuff we are looking for”, and my response was, “I don’t know that I am capable of thinking up those things on my own”.  

When I reflect on that moment, I chuckle at how simplistic that data solution was within the context of my current knowledge. When I look at the types of data analyses I'm capable of and the knowledge I possess now through the lens of what I was capable of only two years ago, I can see the growth. When I look at the quality of my work today compared to the past, distant and recent, there is growth. As a parent of school-aged children now, I see the incredible pressure this culture places on immediate success and high performance. My middle child, who is four years younger than her older sister, has unrealistic expectations of her own capabilities and limitations, and often finds herself at a comparative disadvantage to her sister. Both of my school-aged children have been asked to perform tasks at which they fail or don't perform to the level they desire or expect, and when asked to try again they've huffed in frustration and despair, "I can't do that, dad!", to which I always reply, "No. You can't yet. You CAN figure it out!"

Oh, if only I had learned that lesson earlier in my life. Sometimes we have families with impossible expectations of us. Sometimes we work for employers who want us to perform at a high level, never make mistakes, and are waiting with the hammer held twitchingly above our heads, ready for us to fail. Sometimes our educational system is designed to grind us through the mill at its speed when we really need to back up and master foundational things...the list goes on.

Let me assure you of some things: you will disappoint those you love. You will make an embarrassing mistake at your job. You will misunderstand a school assignment and get a bad grade. You will send that email or chat message that you didn’t think through well enough. You will forget a deadline. You will get turned down for that promotion. You will receive rejection letters for almost all of those “dream jobs” with the nice salaries you’ve applied for.  

And that’s ok.

Embrace failure. It isn’t the end; it is an opportunity to learn and grow. 

Embrace chuckling at the simpleton’s drivel you produced “back when”; you were proud of it then because it was what you were capable of then.

Pursue growth, not perfection; every project and every challenge is an opportunity to get better, so embrace where you are.

Finally, never get comfortable. Life is a journey, not a destination, and if we ever deceive ourselves into thinking we can rest on our laurels, we stop growing. It takes an oak tree a hundred years to tower over its peers. Do you see it now? If we recognize that our journey is about growth, it is ok to be where we are and to accept that growth takes time and persistence.

Cool Extra Resources:

A UTK class I HIGHLY recommend for studying student success: ELPS 595: Student Success in Higher Education

A book that was instrumental in my understanding of wellbeing, belonging, and success:

Quaye, S. J., Harper, S. R., & Pendakur, S. L. (Eds.). (2020). Student engagement in higher education: Theoretical perspectives and practical approaches for diverse populations (Third edition). Routledge. 

Wellbeing/Strengths Assessments: 

Gallup Clifton Strengths: https://www.gallup.com/cliftonstrengthsforstudents/ 

EdResearch for Action: https://edresearchforaction.org/research-briefs/evidence-based-practices-for-assessing-students-social-and-emotional-well-being-2/  

 
Full Reference List: 

Arruda, W. (2021, December 10). Why failure is essential to success. Forbes. https://www.forbes.com/sites/williamarruda/2015/05/14/why-failure-is-essential-to-success/

Essential post-pandemic skills. (2021). ACCA Global. https://www.accaglobal.com/lk/en/affiliates/advance-ezine/careers-advice/post-pandemic-skills.html

Evidence-based practices for assessing students’ social and emotional well-being. (n.d.). EdResearch for Action. Retrieved January 5, 2025, from https://edresearchforaction.org/research-briefs/evidence-based-practices-for-assessing-students-social-and-emotional-well-being-2/

Quaye, S. J., Harper, S. R., & Pendakur, S. L. (Eds.). (2020). Student engagement in higher education: Theoretical perspectives and practical approaches for diverse populations (3rd ed.). Routledge.

Singh, A. (2021, August 23). The top data science skills for the post-Covid world. Global Tech Council. https://www.globaltechcouncil.org/data-science/the-top-data-science-skills-for-the-post-covid-world/


Clean, Correlate, and Compare: The Importance of Having a Data Analysis Plan

February 7, 2025 by Jonah Hall

By Dr. Jennifer Ann Morrow

Data Cleaning Step 2: Create a Data Analysis Plan

Hi again! For those who read my earlier blog post, Data Cleaning Step 1: Create a Data Codebook, you know I love data cleaning! My colleagues Dr. Louis Rocconi and Dr. Gary Skolits love to nerd out and talk about data cleaning and why it is such an important part of analyzing your evaluation data. As I mentioned in that earlier post, before we can tackle our evaluation or assessment questions, we need to get our data organized. Creating a data analysis plan is an important part of the data management process. Once I create the first draft of my data codebook (Step 1), I draft a data analysis plan, and both get updated as I make changes to my evaluation/assessment dataset.

Why a Data Analysis Plan?

While it can be tempting to dive right in and conduct your proposed analyses (I mean, who doesn’t want to run a multiple regression right away?!?), it’s good practice to have a detailed plan for how you intend to clean your data and how you will address your evaluation/assessment questions. Creating a data analysis plan BEFORE you start working with your dataset helps you think through what data you need to collect to address your questions, which specific pieces of the data you will use, how you will analyze the data you collect, and what the most appropriate ways are to disseminate the results. While creating a data analysis plan can be time consuming, it is an invaluable part of the data management and analysis process. Also, if you are working with a team (as many of us evaluation/assessment professionals do!), it makes collaboration, replication, and report generation easier. Just like the data codebook, the data analysis plan is a living document that changes as you make decisions and modifications to your dataset and planned analyses.

I share the data analysis plan with my clients throughout the life of the project so they are aware of the process and can chime in if they have questions or requests for different ways to approach the analysis of their data. At the end of my time with the project, I routinely share a copy of the data codebook, the data analysis plan, and a cleaned/sanitized dataset for the client to continue using to inform their program and organization.

What is in a Data Analysis Plan?

Whether you create your data analysis plan in Excel, Word, or some other software platform (I tend to prefer Word), these are my suggestions for what to include in a data analysis plan (a brief sketch of how a few of these pieces might look in practice follows the list):

  1. General Instructions to Data Analysts
  2. List of Datasets for the Project
  3. Who Is Responsible for Each Section of the Analysis Plan
  4. Evaluation/Assessment Questions
  5. Variables You Will Use in Your Analyses
  6. Step-by-Step Description of Your Data Cleaning Process
  7. Specific Analyses You Will Use to Address Each Evaluation/Assessment Question
  8. Proposed Data Visualizations for Each Analysis
  9. Software Syntax/Code (e.g., SPSS, R) You Will Use to Analyze Your Data
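To make this concrete, here is a minimal sketch of what item 9 could look like as an R script skeleton whose sections mirror the plan above. The file name, variable names, and evaluation question are hypothetical placeholders, not part of any actual project; your own script would reflect your datasets and questions.

# ---------------------------------------------------------------
# Data analysis plan: script skeleton (hypothetical example)
# Project: Mentoring Program Evaluation | Analyst: [name]
# ---------------------------------------------------------------
library(tidyverse)   # data cleaning and visualization

# Dataset for the project (hypothetical file name; see plan Section 2)
survey_raw <- read_csv("mentoring_survey_fall.csv")

# Data cleaning steps (mirrors plan Section 6)
survey_clean <- survey_raw %>%
  distinct() %>%                                      # remove duplicate rows
  filter(!is.na(participant_id)) %>%                  # drop records with no ID
  mutate(satisfaction = as.numeric(satisfaction))     # raw file stores this item as text

# Evaluation Question 1 (hypothetical): Are participants satisfied with the program?
# Planned analysis: descriptive statistics; planned visualization: histogram (Sections 7-8)
summary(survey_clean$satisfaction)
ggplot(survey_clean, aes(x = satisfaction)) +
  geom_histogram(binwidth = 1)

Each evaluation/assessment question would get its own clearly labeled block like the last one, so the script reads in the same order as the plan and the report.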

Since there are often multiple people working with my datasets (boy, did it take me a long time to get used to giving up control here!), including step-by-step instructions for how your data analysts should name, label, and save files is extremely important. Also, providing guidance for how data analysts should document what they do (see the project notebook in your data codebook!) and how they arrived at their decisions is invaluable for keeping the evaluation/assessment team aware of each step of the data analysis process.
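For instance, the “general instructions” section might spell out a naming convention like the hypothetical one below (the project name, prefixes, and dates are just illustrations of the idea, not a required standard):

# File naming convention (hypothetical example)
#   <project>_<content>_<version-date>.<ext>
#   mentoring_codebook_2025-02-01.docx
#   mentoring_analysisplan_2025-02-01.docx
#   mentoring_survey_raw_2025-01-15.csv      # raw files are never edited directly
#   mentoring_survey_clean_2025-02-01.csv
# Analysts: log every cleaning decision, with the date and rationale, in the project notebook.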

I typically organize my data analysis plan by first listing any data cleaning that needs to be completed, followed by each of my evaluation/assessment questions. This way, all of my analyses are organized by the questions my client wants me to address, which helps immensely when writing up my evaluation/assessment report for them.

Including either the software syntax/code (if you’re using something like SPSS or R) or a step-by-step description of how you’re using the tool (if you’re using something like Excel) to clean and analyze the data is helpful not only to your team members but also to your clients. It allows them to easily rerun analyses and critique the steps you took. I also include notes about my decision-making process in my syntax/code so anyone can easily follow how and why I approached the analyses the way I did.
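As an illustration, decision notes embedded in syntax might look like the short, hypothetical R chunk below. It builds on the survey_clean data frame from the earlier sketch, and the 50% completion rule, the pct_complete and q7 variables, and the dates are all made up for the example.

library(dplyr)

# Decision note (2025-01-15, hypothetical): respondents who completed less than
# 50% of the survey are excluded from scale scores; agreed with the client at the
# January check-in and logged in the project notebook.
survey_analytic <- survey_clean %>%
  filter(pct_complete >= 50) %>%
  # Decision note: item q7 is reverse-scored on a 1-5 scale, so recode it
  # before computing the satisfaction scale.
  mutate(q7_recoded = 6 - q7)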

Additional Advice

While it is important to develop your data analysis plan early in your project, always remember that it is a living document and will definitely change as you collect data, meet with your client to discuss the evaluation/assessment, and work through the data cleaning process. Your “perfect” plan may not work once you have collected your data, so be flexible in your approach. Just remember to document any changes you make to the plan and to your data in your project notebook!

Resources

12 Steps of Data Cleaning Handout: https://www.dropbox.com/scl/fi/x2bf2t0q134p0cx4kvej0/TWELVE-STEPS-OF-DATA-CLEANING-BRIEF-HANDOUT-MORROW-2017.pdf?rlkey=lfrllz3zya83qzeny6ubwzvjj&dl=0 

http://fogartyfellows.org/wp-content/uploads/2015/09/SAP_workbook.pdf 

https://cghlewis.com/blog/project_beginning

https://learn.crenc.org/how-to-create-a-data-analysis-plan

https://pmc.ncbi.nlm.nih.gov/articles/PMC4552232/pdf/cjhp-68-311.pdf

https://the.datastory.guide/hc/en-us/articles/360003250516-Creating-Analysis-Plans-for-Surveys

https://www.slideshare.net/slideshow/brief-introduction-to-the-12-steps-of-evaluagio/26168236#1

https://www.surveymonkey.com/mp/developing-data-analysis-plan

https://youtu.be/105wwMySZYc?si=9SEqjP2HWB5k4MDn

https://youtu.be/djVHKjmImrw?si=BdfSxl6C4weZEOgD

