Educational Leadership and Policy Studies

Evaluation Methodology Blog

How Do I Critically Consume Quantitative Research?

May 1, 2024 by Jonah Hall

By Austin Boyd 

Every measurement, evaluation, statistics, and assessment (MESA) professional, whether an established educator and practitioner or an aspiring student, engages with academic literature in some capacity. Sometimes for work, other times for pleasure, but always in the pursuit of new knowledge. But how do we as consumers of research determine whether the quantitative research we engage with is high quality?

My name is Austin Boyd, and I am a researcher, instructor, and ESM alumnus. I have read my fair share of articles over the past decade and was fortunate enough to publish a few of my own. I have read articles in the natural, formal, applied, and social sciences, and while they all shared the title of peer-reviewed publication, there was definitely variability in the quality of the quantitative research from one article to the next. Initially, it was difficult for me to even consider the idea that a peer-reviewed publication could be anything less than perfect. However, as I have grown as a critical consumer of research, I have devised six questions to keep in mind when reading articles with quantitative analyses that allow me to remain objective in the face of exciting results.

  1. What is the purpose of the article?

The first question to keep in mind when reading an article is, “What is its purpose?” Articles may state their purpose in the form of research questions or even in the title by using words such as “empirical”, “validation”, and “meta-analysis”. While the purpose of an article has no bearing on its quality, it does impact the type of information a reader should expect to obtain from it. Do the research questions indicate that the article will be presenting exploratory research on a new phenomenon or attempting to validate previous research findings? Remaining aware of the article’s purpose allows you to determine whether the information is relevant and within the scope of what the article should be providing.

  2. What information is provided about obtaining participants and about the participants themselves?

The backbone of quantitative research is data. In order to have any data, participants or cases must be found and measured for the phenomena of interest. These participants are all unique, and it is this uniqueness that needs to be disclosed to the reader. Information on the population of interest, how the selected participants were recruited, who they are, and why their results were or were not included in the analysis is essential for understanding the context of the research. Beyond the article itself, the demographics of the participants are also important for planning future research. While research participants are drawn largely from Western, educated, industrialized, rich, and democratic (WEIRD) societies (Henrich et al., 2010), it should not be assumed that this is the case for all research. The author(s) of an article should disclose demographic information about the participants so the readers understand the context of the data and the generalizability of the results, and so that researchers can accurately replicate or extend the research in new contexts.

  3. Do the analyses used make sense for the data and proposed research question(s)?

In order to obtain results from the quantitative data collected, some form of analysis must be conducted. The most basic methods of exploring quantitative data are called statistics (Sheard, 2018). The selected statistical analysis should align with the variables presented in the article and answer the research question(s) guiding the project. Variables measured on a nominal scale should not be used as the outcome variable in analyses that compare group means, such as t-tests and ANOVAs, while ratio-scale variables should not be used in analyses of frequency distributions, such as chi-square tests. At the same time, there are analyses that require the same variable types, making them seemingly interchangeable. For example, t-tests, logistic regressions, and point-biserial correlations all use two variables, one continuous and one binary. However, each of these analyses addresses a different research question, such as “Is there a difference between groups?”, “Can we predict an outcome?”, and “Is there a relationship between variables?”. While there is a level of subjectivity as to which statistical analysis can be used to analyze data, there are objectively incorrect analyses given the overarching research questions and the scale of measurement of the available variables in the data.
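
For illustration only (simulated data, hypothetical variable names), the R sketch below runs all three analyses on the same pair of variables, one binary and one continuous, to show how each answers a different question.

```r
# Simulated data: a binary group indicator and a continuous score
set.seed(42)
group <- rbinom(100, size = 1, prob = 0.5)
score <- 50 + 5 * group + rnorm(100, sd = 10)

# "Is there a difference between groups?" -- independent-samples t-test
t.test(score ~ factor(group))

# "Can we predict an outcome?" -- logistic regression predicting group membership
summary(glm(group ~ score, family = binomial))

# "Is there a relationship between variables?" -- point-biserial correlation
# (a Pearson correlation in which one of the variables is binary)
cor.test(score, group)
```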

  4. What results are provided?

While a seemingly straightforward question, there is a lot of information that can be provided about a given analysis. The most basic, and least informative, is a blanket statement about statistical significance. Even when there is no statistically significant result to report, a blanket statement is not sufficient, given all the different values that can be reported for each analysis. For example, a t-test has a t value, degrees of freedom, p value, confidence interval, power level, and effect size, all of which provide valuable information about the results. While having some of these values does allow the reader to calculate the missing ones, the onus should not be put on the reader to do so (Cohen, 1990). Additionally, depending on the type of statistical analysis chosen, additional tests must be conducted to determine whether the data meet the assumptions necessary for the analysis. The results of these tests of assumptions, and the decisions made based on them, should be reported and supported by the existing literature.
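
As a hedged sketch of fuller reporting (simulated data, illustrative values only), the R code below pulls out the t statistic, degrees of freedom, p value, and confidence interval, checks the equal-variance assumption, and computes Cohen's d from the pooled standard deviation rather than leaving it to the reader.

```r
# Simulated scores for two groups of 40 respondents each
set.seed(1)
g1 <- rnorm(40, mean = 52, sd = 10)
g2 <- rnorm(40, mean = 47, sd = 10)

# Assumption check: equality of variances (report the decision it supports)
var.test(g1, g2)

# Independent-samples t-test: report t, df, p, and the 95% confidence interval
res <- t.test(g1, g2, var.equal = TRUE)
res$statistic; res$parameter; res$p.value; res$conf.int

# Effect size: Cohen's d from the pooled standard deviation
sp <- sqrt(((length(g1) - 1) * var(g1) + (length(g2) - 1) * var(g2)) /
             (length(g1) + length(g2) - 2))
(mean(g1) - mean(g2)) / sp
```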

  5. Is there any discussion of limitations?

Almost every article has limitations in some form or another, and they should be made known to the reader. If an article truly had no limitations, the author would make a point of stating as much. Limitations include limits to the generalizability of the findings, confounding variables, or simply time constraints. While these might seem negative, they are not immediate reasons to discredit an article entirely. As with the participant demographics, the limitations provide further context about the research. They can even be useful in providing direction for follow-up studies, in the same way a future research section would.

  6. Do you find yourself still having questions after finishing the article?

The final question to keep in mind once you have finished reading an article is, “Do you still have questions?” At the end of an article, you shouldn’t find yourself needing more information about the study. You might want to know more about the topic or similar research, but you shouldn’t be left wondering about pieces of the research design or other methodological aspects of the study. High-quality research deserves an equally high-quality article, which includes ample information about every aspect of the study.

While not an exhaustive list, these six questions are designed to provide a starting point for determining whether research with quantitative data is of high quality. Not all research is peer-reviewed (conference presentations, blog posts, and white papers, for example), and simply being peer-reviewed does not make a publication infallible. It is important to understand how to critically consume research in order to successfully navigate the ever-expanding body of scientific literature.

Additional Resources:

https://blogs.lse.ac.uk/impactofsocialsciences/2016/05/09/how-to-read-and-understand-a-scientific-paper-a-guide-for-non-scientists/  

https://statmodeling.stat.columbia.edu/2021/06/16/wow-just-wow-if-you-think-psychological-science-as-bad-in-the-2010-2015-era-you-cant-imagine-how-bad-it-was-back-in-1999/ 

https://totalinternalreflectionblog.com/2018/05/21/check-the-technique-a-short-guide-to-critical-reading-of-scientific-papers/ 

https://undsci.berkeley.edu/understanding-science-101/how-science-works/scrutinizing-science-peer-review/ 

https://www.linkedin.com/pulse/critical-consumers-scientific-literature-researchers-patients-savitz/ 

References:

Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45(12), 1304-1312. DOI: 10.1037/0003-066X.45.12.1304

Henrich, J., Heine, S., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61-83. DOI: 10.1017/S0140525X0999152X

Sheard, J. (2018). Chapter 18 – Quantitative data analysis. In K. Williamson & G. Johanson (Eds.), Research Methods (2nd ed., pp. 429-452). Chandos Publishing. DOI: 10.1016/B978-0-08-102220-7.00018-2   

Filed Under: Evaluation Methodology Blog

Engaging Students in Online, Asynchronous Courses: Strategies for Success

April 15, 2024 by Jonah Hall

By S. Nicole Jones, Ph.D. 

Hello! My name is Nicole Jones, and I am a 2022 graduate of the Evaluation, Statistics, and Methodology (ESM) PhD program at the University of Tennessee, Knoxville (UTK). I currently work as the Assessment & Accreditation Coordinator in the College of Veterinary Medicine (CVM) at the University of Georgia (UGA). I also teach online, asynchronous program evaluation classes for UTK’s Evaluation, Statistics, & Methodology PhD and Evaluation Methodology MS programs. My research interests include the use of artificial intelligence in evaluation and assessment, competency-based assessment, and outcomes assessment. 

Prior to teaching part-time for UTK, I served as a graduate teaching assistant in two online, synchronous ESM classes while enrolled in the PhD program: Educational Research and Survey Research. In addition, I taught in-person first-year seminars to undergraduates for many years in my previous academic advising roles. However, it wasn’t until I became involved in a teaching certificate program offered by UGA’s CVM this year that I truly began to reflect on my own teaching style and explore ways to better engage students, especially in an online, asynchronous environment. For those who are new to teaching online classes or just need some new ideas, I thought it would be helpful to share what I’ve learned about engaging students online.

Online Learning 

While many online courses meet synchronously, meaning they meet virtually at a scheduled time through platforms like Zoom or other Learning Management System (LMS) tools, there are also online classes that have no scheduled meeting times or live interactions. These classes are considered asynchronous. If you have taken an online, asynchronous course, you likely already know that it can be easy to forget about the class, primarily because there is no scheduled class time to keep you on track. When I worked as an academic advisor, I would often encourage my students who registered for these types of courses to set aside certain days or times of the week to devote to those classes. Many college students struggle with time management, especially in their first year, so this was one way to help them stay engaged in the class and up to date with assignments.

While it is certainly important for students to show up (or log in) and participate, it’s even more important for instructors to create an online environment that will motivate students to do so. As discussed by Conrad and Donaldson (2012), online engagement is related to student participation and interaction in the classroom, and learning in the classroom (online or in-person) rests upon the instructor’s ability to create a sense of presence and engage students in the learning process. The key to engaging online learners is to keep students engaged and supported so that they take responsibility for their own learning (Conrad & Donaldson, 2012). So, how might you create an engaging online environment for students?

Engaging Students in Online Classes 

Below are some strategies I currently use to engage students in my online, asynchronous program evaluation classes:  

  • Reach out to the students prior to the start of class via welcome email 
  • Post information about myself via an introduction post – also have students introduce themselves via discussion posts 
  • Develop a communication plan – let students know the best way to get in touch with me 
  • Host weekly virtual office hours – poll students about their availability to find the best time 
  • Clearly organize the course content by weekly modules 
  • Create a weekly checklist and/or introduction to each module 
  • Use the course announcements feature to send out reminders of assignment due dates  
  • Connect course content to campus activities, workshops, events, etc.  
  • Utilize team-based projects 
  • Provide opportunities for students to reflect on learning (e.g., weekly reflection journals) 
  • Provide feedback on assignments in a timely manner 
  • Allow for flexibility and leniency  
  • Reach out to students who miss assignment due dates – offer to meet one-on-one if needed 

In addition to these strategies, the Center for Teaching and Learning at Northern Illinois University has an excellent website with even more recommendations for increasing student engagement in online courses. Their recommendations focus on the following areas: 1) set expectations and model engagement, 2) build engagement and motivation with course content and activities, 3) initiate interaction and create faculty presence, 4) foster interaction between students and create a learning community, and 5) create an inclusive environment. I also recommend checking your own institution’s Center for Teaching and Learning to see if they have tips or suggestions, as theirs may be more specific to the LMS your institution uses. Lastly, you may find the following resources helpful if you wish to learn more about student engagement and online teaching and learning.

Helpful Resources 

American Journal of Distance Education: https://www.tandfonline.com/toc/hajd20/current  

Fostering Connection in Hybrid & Online Formats:
https://www.ctl.uga.edu/_resources/documents/Fostering-Connection-in-Hybrid-Online-Formats.pdf  

Conrad, R. M., & Donaldson, J. A. (2012). Continuing to engage the online learner: More activities and resources for creative instruction. San Francisco, CA: Jossey-Bass.

Groccia, J. E. (2018). What is student engagement? New Directions for Teaching and Learning, 154, 11-20.  

How to Make Your Teaching More Engaging: Advice Guide 

https://www.chronicle.com/article/how-to-make-your-teaching-more-engaging/?utm_source=Iterable&utm_medium=email&utm_campaign=campaign_3030574_nl_Academe-Today_date_20211015&cid=at&source=ams&sourceid=&cid2=gen_login_refresh 

 How to Make Your Teaching More Inclusive:  

https://www.chronicle.com/article/how-to-make-your-teaching-more-inclusive/ 

Iowa State University Center for Excellence in Learning and Teaching: https://www.celt.iastate.edu/learning-technologies/engaging-students/ 

Khan, A., Egbue, O., Palkie, B., & Madden, J. (2017). Active learning: Engaging students to maximize learning in an online course. The Electronic Journal of e-Learning, 15(2), 107-115. 

Lumpkin, A. (2021). Online teaching: Pedagogical practices for engaging students synchronously and asynchronously. College Student Journal, 55(2), 195-207. 

Northern Illinois University Center for Teaching and Learning. (2024, March 1). Recommendations to Increase Student Engagement in Online Classes. https://www.niu.edu/citl/resources/guides/increase-student-engagement-in-online-courses.shtml.   

Online Learning Consortium: https://onlinelearningconsortium.org/read/olc-online-learning-journal/  

Watson, S., Sullivan, D. P., & Watson, K. (2023). Teaching presence in asynchronous online classes: It’s not just a façade. Online Learning, 27(2), 288-303. 

Filed Under: Evaluation Methodology Blog

Careers in Program Evaluation: Finding and Applying for a Job as a Program Evaluator

April 1, 2024 by Jonah Hall

By Jennifer Ann Morrow, Ph.D. 

Introduction: 

Hi! My name is Jennifer Ann Morrow and I’m an Associate Professor in Evaluation, Statistics, and Methodology at the University of Tennessee-Knoxville. I have been training emerging assessment and evaluation professionals for the past 22 years. My main research areas are training emerging assessment and evaluation professionals, higher education assessment and evaluation, and college student development. My favorite classes to teach are survey research, educational assessment, program evaluation, and statistics.

What’s Out There for Program Evaluators? 

What kind of jobs are out there for program evaluators? What organizations hire program evaluators? Where should I start my job search? What should I submit with my job application? These are typical questions my students ask me as they are considering joining the evaluation job market. Searching for a job can be overwhelming, and with so many resources and websites available it is easy to get lost in all of the information. Here are some strategies that I share with my students as I help them navigate the program evaluation job market; I hope you find them helpful!

First, I ask students to describe the skills and competencies they have and which ones they believe they are strong in (and hopefully enjoy using!). In our program we use the American Evaluation Association Competencies (https://www.eval.org/About/Competencies-Standards/AEA-Evaluator-Competencies) in a self-assessment where students rate how confident they are in their ability to perform each competency. They rate themselves, and develop strategies for remedying deficiencies, each year that they are in our program. Conducting a self-assessment of your skills, competencies, strengths, and weaknesses is a great way to figure out what types of jobs best fit your skillset. It is also helpful when crafting a cover letter! Check out the resources below for additional examples of self-assessments!

Second, I have students create or update their curriculum vitae (CV) and resume. Depending on the jobs they plan on applying for, they may need a CV or a resume. I tell them to use the information from their skills self-assessment and their graduate program of study to craft their CV/resume. I also have them develop a general cover letter (these should be tailored for each specific job) that showcases their experience, skills, and relevant work products. There are a ton of resources available online (see some listed below), and I share example CVs/resumes and cover letters from some of our graduates with them. I also encourage them to get feedback on these from faculty and peers before using them in a job application.

Third, I encourage students to develop (or clean up) their social media presence. I highly recommend creating a LinkedIn profile (My LinkedIn Profile). Make sure your profile showcases your skills, education, and experiences, and make connections with others in the program evaluation field. LinkedIn is also a great place to search for evaluation jobs! I also recommend that students create an academic website (Dr. Rocconi’s Website), where you can go into more detail about your experiences and share work products (e.g., publications, presentations, evaluation reports). Make sure you put your LinkedIn and website links at the top of your CV/resume!

Fourth, I provide my students tips for where and how to search for program evaluation jobs. I encourage them to draft relevant search terms (e.g., program evaluator, evaluation specialist, program analyst, data analyst) and make a list of the job sites (see the resources for some of my favorites!) they are going to use. On many of these job sites you can search by key terms, job title, location, salary, and more to help narrow down the results. Many of them also let you sign up for job alerts based on your search terms, so you receive an email when a new job fits. I also encourage students to join their major professional organizations (e.g., AEA) and sign up for their newsletters or listservs, as many job opportunities are posted there.

Lastly, I tell students to create an organized job search plan. I typically do this in Excel, but you can organize your information in a variety of formats and platforms. I create an Excel file that contains all of the jobs I apply for (e.g., name of organization, link to job ad, contact information, date applied) and a list of when and where I am searching for jobs. When I was actively searching for jobs, I dedicated time each week to go through listserv emails and search job sites for relevant openings, and I updated my Excel file each week during my search. It helps to keep things organized in case you need to follow up with organizations regarding the status of your application.
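
If a spreadsheet isn't handy, the same tracking structure can be kept in a small R data frame (the entries below are hypothetical) and written out to a CSV.

```r
# Hypothetical job-search tracker mirroring the columns described above
job_tracker <- data.frame(
  organization = c("Example Evaluation Center", "State Education Agency"),
  job_link     = c("https://example.org/jobs/123", "https://example.gov/careers/456"),
  contact      = c("hr@example.org", "careers@example.gov"),
  date_applied = as.Date(c("2024-03-04", "2024-03-11"))
)
write.csv(job_tracker, "job_search_plan.csv", row.names = FALSE)
```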

So, good luck on your job search and I hope that my tips and resources are helpful as you start your journey to becoming a program evaluator! 

 

Resources 

American Evaluation Association Competencies: https://www.eval.org/About/Competencies-Standards/AEA-Evaluator-Competencies  

Article about How to Become a Program Evaluator: https://www.evalcommunity.com/careers/program-evaluator/ 

Article about Program Evaluation Careers: https://money.usnews.com/money/careers/articles/2008/12/11/best-kept-secret-career-program-evaluator 

Article about Program Evaluation Jobs: https://www.evalcommunity.com/job-search/program-evaluation-jobs/ 

Creating a LinkedIn Profile: https://blog.hubspot.com/marketing/linkedin-profile-perfection-cheat-sheet  

Creating an Academic Website: https://theacademicdesigner.com/2023/how-to-make-an-academic-website/  

Evaluator Competencies Assessment: https://www.khulisa.com/wp-content/uploads/sites/9/2021/02/2020-Evaluator-Competencies-Assessment-Tool-ECAT_Final_2020.07.27.pdf  

Evaluator Qualities: https://www.betterevaluation.org/frameworks-guides/managers-guide-evaluation/scope-evaluation/determine-evaluator-qualities 

Evaluator Self-Assessment: https://www.cdc.gov/evaluation/tools/self_assessment/evaluatorselfassessment.pdf  

Program Evaluation Curriculum Vita Tips: https://wmich.edu/sites/default/files/attachments/u1158/2021/Showcasing%20Your%20Eval%20Competencies%20in%20Your%20Resume%20or%20Vita%20for%20PDF.pdf  

Program Evaluation Resume Tips: https://www.zippia.com/program-evaluator-jobs/skills/#  

Resume and CVs Resources: https://www.careereducation.columbia.edu/topics/resumes-cvs  

Resume and Job Application Resources: https://academicguides.waldenu.edu/careerservicescenter/resumesandmore  

Six C’s of a Good Evaluator: https://www.evalacademy.com/articles/2019/9/26/what-makes-a-good-evaluator  

UTK’s Evaluation Methodology MS program (distance ed): https://volsonline.utk.edu/programs-degrees/education-evaluation-methodology-ms/ 

AAPOR Jobs: https://jobs.aapor.org/jobs/?append=1&quick=industry%7Csurvey&jas=3 

American Evaluation Association Job Bank: https://careers.eval.org/ 

Evaluation Jobs: https://evaluationjobs.org/ 

Higher Ed Jobs: https://www.higheredjobs.com/ 

Indeed.com: https://www.indeed.com/ 

Monitoring and Evaluation Career Website: https://www.evalcommunity.com/ 

NCME Career Center: https://www.ncme.org/community/community-network2/careercenter 

USA Government Job Website: https://www.usajobs.gov/ 

 

Filed Under: Evaluation Methodology Blog

How My Dissertation Came to be through ESM’s Support and Guidance

March 15, 2024 by Jonah Hall

By Meredith Massey, Ph.D. 

Who I am

Greetings! I’m Dr. Meredith Massey. I finished my PhD in Evaluation, Statistics, and Methodology (ESM) at UTK in the Fall of 2023. In addition to my PhD in ESM, I also completed graduate certificates in Women, Gender, and Sexuality and Qualitative Research Methods in Education. While I was a part-time graduate student, I also worked full-time as an evaluation associate at Synergy Evaluation Institute, a university-based evaluation center. By day, I worked for clients evaluating their STEM education and outreach programs. By night, I was an emerging scholar in ESM. During my time in the program, my research interests grew to include andragogical issues in applied research methods courses, classroom measurement and assessment, feminist research methods, and evaluation.

How my dissertation came to be

In the ESM program, students can choose to complete a three-manuscript dissertation rather than a traditional five-chapter dissertation. When it came time to start deciding what my dissertation would look like, my faculty advisor, Dr. Leia Cain, suggested I consider the three-manuscript option. As someone with varied interests, I was drawn to this option because it allowed me the flexibility to work on three separate but related studies. My dissertation flowed from a research internship that I completed with Dr. Cain, in which I interviewed qualitative faculty about their assessment beliefs and practices within their qualitative methods courses. I wrote a journal article on that study to serve as my comprehensive exam writing requirement. Using my original internship study as the basis for my first dissertation manuscript was an expedient strategy, as it allowed me to structure my second and third manuscripts on the findings of my first study. I presented my ideas for my second and third manuscripts to my committee in my dissertation proposal, accepted their feedback on how to proceed with my studies, and then got to work.

Dissertation topic and results

In my multi-paper dissertation, entitled “Interviews, rubrics and stories (Oh my!): My journey through a three-manuscript dissertation,” I chose to center faculty and students’ perspectives on assessment and learning. My first and second studies both focused on those two issues, while the third paper explored the student perspective further through the story of the parallel formation of my scholarly identity and my new identity as part of a married couple.

In the first study, “Interviewing the Interviewers: How qualitative faculty assess interviews,” I reported how faculty use interview assignments in their courses and how they engage with assessment tools such as rubrics for those assignments. We learned that faculty view interview assignments as the best and most comprehensive assignment their students can complete to gain experience as qualitative researchers. While instructors had differing opinions on whether rubrics were an appropriate tool for their assessment practices, all of them believed that giving students feedback was an essential assessment practice. The findings from that manuscript helped shape the plan for the second study. In “I can do qualitative research: Using student-designed rubrics to teach interviewing,” I detailed testing an innovative student-created rubric for an interview assignment in an introductory qualitative research methods course and used student reflections as the basis for an ethnodrama about how students experience their first interview assignments and how they engaged with their rubric. From this study, we learned that students grew in their confidence in conducting interviews, experienced a transformation in their paradigm, and were conflicted about the student-designed rubric: some found it useful, and some did not.

Both manuscripts informed my third manuscript, an autoethnography detailing two parallel transitions in my identity: from evaluator to scholar, and from single person to married person. I wrote interweaving stories chronicling the similar and contrasting methods I use as an evaluator and researcher, how this tied into my growing identity as a scholar, and how I noticed my identity changing throughout my engagement and early marriage to my longtime boyfriend, now husband. These studies contributed valuable knowledge to the existing, though limited, andragogical literature on qualitative research methods. My hope going forward is that qualitative faculty continue this focus, beginning conversations about their classroom assessments and completing their own andragogical studies to determine the impact of their teaching on their students’ learning.

What’s next?

Now that I’m finished with my dissertation and my studies, I am happy to report that I have accepted a promotion at my job at Synergy Evaluation Institute, and I’ve also been given the opportunity to teach qualitative research methods courses as an adjunct in the ESM program. I’m excited to continue being associated with the program and to teach future ESM students. Being in the ESM program at UTK, while difficult at times, has also been a joy. The ESM program encouraged me to explore my varied interests and ultimately supported me as I grew professionally as an evaluator and scholar. The program accommodated and respected me as a working professional, and I highly recommend it to any student with an interest in working with data as an evaluator, assessment professional, statistician, qualitative researcher, faculty member, or all of the above. There’s a place for all in ESM.

Resources

Journal article citation

Massey, M.C., & Cain, L.K. (In press). Interviewing the interviewers: How qualitative faculty assess interviews. The Qualitative Report.

Books specifically about Qualitative Research Methods Andragogy

Eisenhart, M., & Jurow, A. S. (2011). Teaching qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (4th ed., pp. 699-714). Sage.

Hurworth, R. E. (2008). Teaching qualitative research: Cases and issues. Sense Publishers.

Swaminathan, R., & Mulvihill, T. M. (2018). Teaching qualitative research: Strategies for engaging emerging scholars. Guilford Publications.

Books to read to become familiar with Ethnodrama as a method

Leavy, P. (2015). Handbook of Arts-Based Research (2nd ed.). The Guilford Press.

Leavy, P. (2018). Handbook of Arts-Based Research (3rd ed.). The Guilford Press.

Saldana, J. (2016). Ethnotheatre: Research from page to stage. Routledge. http://doi.org/10.4324/9781315428932

Most useful citations to become familiar with autoethnography as a method

Cooper, R., & Lilyea, B. V. (2022). I’m Interested in Autoethnography, but How Do I Do It?. The Qualitative Report, 27(1), 197-208. https://doi.org/10.46743/2160-3715/2022.5288

Ellis, C. (2004). The ethnographic I: A methodological novel about autoethnography. AltaMira.

Ellis, C. (2013). Carrying the torch for autoethnography. In S. H. Jones, T. E. Adams, & C. Ellis (Eds.), Handbook of autoethnography (pp. 9-12). Left Coast Press.

Filed Under: Evaluation Methodology Blog

Introducing the Evaluation Methodology MS Program at UTK!

March 1, 2024 by Jonah Hall

By Dr. Jennifer Ann Morrow 

Hi everyone! My name is Dr. Jennifer Ann Morrow, and I’m the program coordinator for the University of Tennessee at Knoxville’s new distance education master’s program in Evaluation Methodology. I’m happy to announce that we are currently taking applications for our first cohort that will start in Fall 2024. In a world driven by data, the EM Master’s program gives you the skills to make evidence-based decisions!  

So Why Should You Join Our Program? 

Fully Online Program 

Our new program is designed for the working professional: all courses are fully online and asynchronous, which enables students to complete assignments at times convenient for them. Although our courses are asynchronous, our faculty offer optional weekly synchronous student hours/help sessions to provide additional assistance and mentorship. Students also participate in both group and individual advising sessions each semester, where they receive mentorship, practical experience suggestions, and career exploration guidance.

Applied Coursework 

Our 30-credit program is designed to be completed in just under 2 years (5 semesters, only 2 courses per semester!). Each class includes hands-on, applied experiences covering the entire program evaluation process, including evaluation design, data collection, data analysis, and data dissemination. In their first year, students take a two-semester program evaluation course sequence, statistics 1, introduction to qualitative research 1, evaluation designs and data collection methods, and an elective. In their second year, students take survey research, disseminating evaluation results, and a two-semester evaluation practicum course sequence in which they finalize a portfolio of their evaluation experiences to fulfill the comprehensive exam requirement. Students who are unable to take 6 credits a semester have up to 6 years to complete the degree at a slower pace.

Experienced Faculty 

Our faculty are experienced educators! All faculty work as evaluators or in a related role, such as assessment professional, applied researcher, or psychometrician. They are dedicated faculty who understand what skills and competencies are needed in the evaluation field and ensure that their classes focus on them. All are actively involved in their professional organizations (e.g., American Evaluation Association, American Psychological Association, Association for the Assessment of Learning in Higher Education, Association for Institutional Research) and publish their scholarly work in peer-reviewed journals.

How to Apply 

It’s easy to apply! Go to the UTK Graduate Admissions Portal (https://apply.gradschool.utk.edu/apply/) and fill out your application. You need 2-3 letters of recommendation (provide your recommenders’ contact information and UTK will reach out to them to complete a recommendation), college transcripts, and a goals statement (a letter introducing yourself and explaining why you want to join our program), and you will need to submit your application fee. No GRE scores are needed! Applications are due by July 1st of each year (though we will review them early if you submit before then!). Tuition is $700 per graduate credit ($775 for out-of-state students).

 

Contact Me for More Information 

If you have any questions about our program just reach out! 

 

Jennifer Ann Morrow Ph.D.
jamorrow@utk.edu
(865)-974-6117
https://cehhs.utk.edu/elps/people/jennifer-ann-morrow-phd/

Helpful Resources 

Evaluation Methodology Program Website: https://cehhs.utk.edu/elps/evaluation-methodology-ms/  

Evaluation Methodology Program VOLS Online Website: https://volsonline.utk.edu/programs-degrees/education-evaluation-methodology-ms/  

Evaluation Methodology Program Student Handbook: https://cehhs.utk.edu/elps/wp-content/uploads/sites/9/2023/11/EM-MASTERS-HANDBOOK-2023.pdf  

UTK Educational Leadership and Policy Studies Website: https://cehhs.utk.edu/elps/  

UTK Educational Leadership and Policy Studies Facebook Page: https://www.facebook.com/utkelps/?ref=embed_page  

UTK Graduate School Admissions Website: https://gradschool.utk.edu/future-students/office-of-graduate-admissions/applying-to-graduate-school/  

UTK Graduate School Admission Requirements: https://gradschool.utk.edu/future-students/office-of-graduate-admissions/applying-to-graduate-school/admission-requirements/  

UTK Graduate School Application Portal: https://apply.gradschool.utk.edu/apply/  

UTK Distance Education Graduate Fees: https://onestop.utk.edu/wp-content/uploads/sites/9/sites/63/2023/11/Spring-24-GRAD_Online.pdf  

UTK Graduate Student Orientations: https://gradschool.utk.edu/future-students/graduate-student-orientations/  

American Evaluation Association: https://www.eval.org/ 

AEA Graduate Student and New Evaluator TIG: https://www.facebook.com/groups/gsnetig/ 

Filed Under: Evaluation Methodology Blog

Evaluation Capacity Building: What is it, and is a Job Doing it a Good Fit for Me?

February 15, 2024 by Jonah Hall

By Dr. Brenna Butler

Hi, I’m Dr. Brenna Butler, and I’m currently an Evaluation Specialist at Penn State Extension (https://extension.psu.edu/brenna-butler). I graduated from the ESM Ph.D. program in May 2021, and in my current role, a large portion of my job involves evaluation capacity building (ECB) within Penn State Extension. What does ECB look like day-to-day, and is ECB a component of a job that would be a good fit for you? This blog post covers some of my thoughts and opinions on what ECB may look like in a job in general. Keep in mind that these opinions are exclusively mine and don’t represent those of my employer.

Evaluation capacity building (ECB) is the process of increasing the knowledge, skills, and abilities of individuals in an organization to conduct quality evaluations. This is often done by evaluators (like me!) providing the tools and information for individuals to conduct sustained evaluative practices within their organization (Sarti et al., 2017). The amount of literature covering ECB is on the rise (Bourgeois et al., 2023), indicating that evaluators taking on ECB roles within organizations may also be increasing. Although there are formal models and frameworks in the literature that describe ECB work within organizations (the article by Bourgeois and colleagues (2023) provides an excellent overview of these), I will cover three specific qualities of what it takes to be involved in ECB in an organization.

ECB Involves Teaching

Much of my role at Penn State Extension involves providing mentorship to Extension Educators on how to incorporate evaluation into their educational programming. Sometimes this mentorship looks like a more formal teaching role, such as conducting webinars and trainings on topics like writing good survey questions or developing a logic model. Other times, it takes a more informal route, such as when I answer questions Extension Educators email me about data analysis or ways to enhance their data visualizations for a presentation. Enjoying teaching and assisting others in all aspects of evaluation are key qualities of an effective evaluator who leads ECB in an organization.

ECB Involves Leading

Taking on an ECB role involves providing a great deal of guidance and being the go-to expert on evaluation within the organization. Individuals will often look to the evaluator in these positions for direction on evaluation and assessment projects. This requires speaking up in meetings to advocate for strong evaluative practices (“Let’s maybe not send out a 30-question survey where every single question is open-ended”). An evaluator involved in ECB work needs to be comfortable speaking up and going against the norms of “how the organization has always done something.”

One way evaluators can tackle this “we’ve always done it this way” mentality is through an evaluation community of practice. Each meeting is held around a different evaluation topic, and members of the organization are invited to talk about what has and has not worked well for them in that area and to showcase some of the work they have conducted in collaboration with the evaluator. The intention is that these community of practice meetings, open to the entire organization, can be one way of moving toward evaluation best practices and leaning less on old habits.

ECB Involves Being Okay with “Messiness”

An organization may invest in hiring an evaluation specialist who can guide the group to better evaluative practices because they lack an expert in evaluation. If this is the case, evaluation plans may not exist, and your role as an evaluator in the organization will be to start from scratch in developing evaluative processes. Alternatively, it could be that evaluations have been occurring in the organization but may not be following best practices, and you will be tasked with leading the efforts to improve these practices.

Work in this scenario can become “messy” in the sense that tracking down historical evaluation data collected before an evaluator was guiding these efforts can be very difficult. For example, there may not be a centralized location or method for storing paper survey data. One version of the data may be tally marks on a sheet of paper indicating the number of responses to each question, while another version of the same survey data may be stored in an Excel file with unlabeled rows. These scenarios require the evaluator to discern whether the historical data are worth combing through and combining so that they can be analyzed, or whether starting from scratch and collecting new data will ultimately save time and effort. Being part of ECB in an organization means being up for the challenge of working through these “messy,” complex scenarios.

Hopefully, this provided a brief overview of some of the work done by evaluators in ECB within organizations and can help you discern if a position involving ECB may be in your future (or not!).

 

Links to Explore for More Information on ECB

https://www.betterevaluation.org/frameworks-guides/rainbow-framework/manage/strengthen-evaluation-capacity

https://www.oecd.org/dac/evaluation/evaluatingcapacitydevelopment.htm

http://www.pointk.org/client_docs/tear_sheet_ecb-innovation_network.pdf

https://wmich.edu/sites/default/files/attachments/u350/2014/organiziationevalcapacity.pdf

https://scholarsjunction.msstate.edu/cgi/viewcontent.cgi?article=1272&context=jhse

 

References

Bourgeois, I., Lemire, S. T., Fierro, L. A., Castleman, A. M., & Cho, M. (2023). Laying a solid foundation for the next generation of evaluation capacity building: Findings from an integrative review. American Journal of Evaluation, 44(1), 29-49. https://doi.org/10.1177/10982140221106991

Sarti, A. J., Sutherland, S., Landriault, A., DesRosier, K., Brien, S., & Cardinal, P. (2017). Understanding of evaluation capacity building in practice: A case study of a national medical education organization. Advances in Medical Education and Practice, 761-767. https://doi.org/10.2147/AMEP.S141886

Filed Under: Evaluation Methodology Blog

Timing is Everything… Or Is It? How Do We Incentivize Survey Participation?

February 1, 2024 by Jonah Hall

By M. Andrew Young

Hello! My name is M. Andrew Young. I am a second-year Ph.D. student in the Evaluation, Statistics, and Methodology Ph.D. program here at UT-Knoxville. I currently work in higher education assessment as a Director of Assessment at East Tennessee State University’s College of Pharmacy.

Let me tell you a story, and you are the main character!

4:18pm Friday Afternoon:

Aaaaaand *send*.

You put the finishing touches on your email. You’ve had a busy, but productive day. Your phone buzzes. You reach down to the desk and turn on the screen to see a message from your friends you haven’t seen in a while.

Tonight still good?

“Oh no! I forgot!” you tell yourself as you flop back in your office chair. “I was supposed to bring some drinks and a snack to their house tonight.”

As it stands – you have nothing.

You look down at your phone while you recline in your office chair, searching for “grocery stores near me.” You find the nearest result and bookmark it for later. You have a lot left to do, and right now, you can’t be bothered.

Yes! I am super excited! When is everyone arriving? You type hurriedly in your messaging app and hit send.

You can’t really focus on anything else. One minute passes by and your phone lights up again with the notification of a received text message.

Everyone is getting here around 6. See you soon!

Thanks! Looking forward to it!

You lay your phone down and dive back into your work.

4:53pm:

Work is finally wrapped up. You pack your laptop into your backpack, grab a stack of papers, and joggle them on your desk to get them at least a little orderly before you jam them in the backpack. You shut your door and rush to your vehicle. You start your car and navigate to the grocery store you bookmarked earlier.

“17 minutes to your destination,” your GPS says.

5:12pm:

It took two extra minutes to arrive because, as usual, you caught the stoplights on the wrong rotation. You finally find a parking spot, shuffle out of your car and head toward the entrance.

You freeze for a moment. You see them.

You’ve seen them many times, and you always try to avoid them. You know there is going to be the awkward interaction of a greeting, a request of some sort; usually for money. Your best course of action is to ignore them. Everyone knows that you hear them, but it is a small price to pay in your hurry.

Sure enough, “Hello! Can you take three minutes of your time to answer a survey? We’ll give you a free t-shirt for your time!”

You shoot them a half smile and a glance as you pick up your pace and rush past the pop-up canopy and table stacked with items you barely pay attention to as you pass.

Shopping takes longer than you’d hoped. The lines are long at this time of day. You don’t have much, just an armful of goods, but no matter, you must wait your turn. Soon, you make your way out of the store to be unceremoniously accosted again.

5:32pm:

You have to drive across town. Now, you won’t even have enough time to go home and change before your dinner engagement. You rush towards the door. The sliding doors part as you pass through the entrance, right by them.

“Please! If you will take three minutes, we will give you a T-shirt. We really want your opinion on an important matter in your community!”

You gesture with your hand and explain, “I’m sorry, but I’m in a terrible rush!”

——————————————————————————————–

So, what went wrong for the survey researchers? Why didn’t you answer the survey? They were at the same place at the same time as you. They offered you an incentive to participate. They specified that it would only take three minutes of your time to complete. So, why did you brush them off, as you have brushed off so many other charities and survey givers situated in front of your store of choice in the past?

Oftentimes, we are asked for our input, or our charity, but before we even receive the first invitation, we have already determined that we will not participate. Why? In this scenario, you were in a hurry. The incentive they were offering was not motivating to you.

Would it have changed your willingness to participate if they offered a $10 gift card to the store you were visiting? Maybe, maybe not.

The question, more and more, is how do we incentivize participation in a survey? Paper, online, person-to-person: they are all suffering from the conundrum of falling response rates (Lindgren et al., 2020). This impacts the validity of your research study. How can you ensure that you are getting a heterogeneous sample from your populations of interest? How can you be sure that you are getting the data you need from the people you want to sample? This can be a challenge.

In recently published work on survey incentives, many studies acknowledge that time and place affect participation, but we don’t quite understand how. Some studies, such as Lindgren et al. (2020), have tried to determine the best time of day and day of the week to invite survey participants, but they themselves discuss a limitation of their study that is endemic to many studies: the lack of heterogeneity of participants and the interplay of response and nonresponse bias:

While previous studies provide important empirical insights into the largely understudied role of timing effects in web surveys, there are several reasons why more research on this topic is needed. First, the results from previous studies are inconclusive regarding whether the timing of the invitation e-mails matter in web survey modes (Lewis & Hess, 2017, p. 354). Secondly, existing studies on timing effects in web surveys have mainly been conducted in an American context, with individuals from specific job sectors (where at least some can be suspected to work irregular hours and have continuous access to the Internet). This makes research in other contexts than the American, and with more diverse samples of individuals, warranted (Lewis & Hess, 2017, p. 361; Sauermann & Roach, 2013, p. 284). Thirdly, only the Lewis and Hess (2017), Sauermann and Roach (2013), and Zheng (2011) studies are recent enough to provide dependable information to today’s web survey practitioners, due to the significant, and rapid changes in online behavior the past decades. (p. 228)

Timing, place/environment, and matching the incentive to the situation and participant (and maybe even topic, if possible) are influential in improving response rates. Best practices indicate that pilot testing survey items can help create a better survey, but how about finding what motivates your target population to even agree to begin the survey in the first place? That is less explored, and I think it is an opportunity for further study.

This gets even harder when you are trying to reach hard-to-reach populations. Many times it takes a variety of approaches, but what is less understood is how to decide on your initial approach. The challenge that other studies have run into, and something that I think will continue to present itself as a hurdle, is this: because of the lack of research on timing and location, and because of the lack of heterogeneity in the studies that do exist, the generalizability of those studies is limited, if not altogether impractical. So, that leads me full circle back to pilot-testing incentives and timing for surveys. Get to know your audience!
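
As one hedged example of what pilot-testing an incentive could look like (hypothetical counts, not data from any study cited here), a simple chi-square test in R compares response rates between two incentive conditions before committing to a full launch.

```r
# Hypothetical pilot: 100 invitations per incentive condition
responded     <- c(tshirt = 28, giftcard = 47)
not_responded <- c(tshirt = 72, giftcard = 53)
pilot <- rbind(responded, not_responded)

prop.table(pilot, margin = 2)   # response rate within each condition
chisq.test(pilot)               # do response rates differ by incentive?
```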

Cool Citations to Read:

Guillory, J., Wiant, K. F., Farrelly, M., Fiacco, L., Alam, I., Hoffman, L., Crankshaw, E., Delahanty, J., & Alexander, T. N. (2018). Recruiting Hard-to-Reach Populations for Survey Research: Using Facebook and Instagram Advertisements and In-Person Intercept in LGBT Bars and Nightclubs to Recruit LGBT Young Adults. J Med Internet Res, 20(6), e197. https://doi.org/10.2196/jmir.9461

Lindgren, E., Markstedt, E., Martinsson, J., & Andreasson, M. (2020). Invitation Timing and Participation Rates in Online Panels: Findings From Two Survey Experiments. Social Science Computer Review, 38(2), 225–244. https://doi.org/10.1177/0894439318810387

Robinson, S. B., & Leonard, K. F. (2018). Designing Quality Survey Questions. SAGE Publications, Inc. [This is our required book in Survey Research!]

Smith, E., Loftin, R., Murphy-Hill, E., Bird, C., & Zimmermann, T. (2013). Improving developer participation rates in surveys. 2013 6th International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE), 89–92. https://doi.org/10.1109/CHASE.2013.6614738

Smith, N. A., Sabat, I. E., Martinez, L. R., Weaver, K., & Xu, S. (2015). A Convenient Solution: Using MTurk To Sample From Hard-To-Reach Populations. Industrial and Organizational Psychology, 8(2), 220–228. https://doi.org/10.1017/iop.2015.29

Neat Websites to Peek At:

https://blog.hubspot.com/service/best-time-send-survey (limitations again: no understanding of demographics; they did say not to send it during high-volume work times, but not everyone works the same M-F 8:00am-4:30pm job)

https://globalhealthsciences.ucsf.edu/sites/globalhealthsciences.ucsf.edu/files/tls-res-guide-2nd-edition.pdf (this is targeted directly towards certain segments of hard-to-reach populations. Again, generalizability challenges, but the idea is there)

Filed Under: Evaluation Methodology Blog

Making the Most of Your Survey Items: Item Analysis

January 15, 2024 by Jonah Hall

By Louis Rocconi, Ph.D. 

Hi, blog world! My name is Louis Rocconi, and I am an Associate Professor and Program Coordinator in the Evaluation, Statistics, and Methodology program at The University of Tennessee, and I am MAD about item analysis. In this blog post, I want to discuss an often overlooked tool to examine and improve survey items: Item Analysis.

What is Item Analysis?

Item analysis is a set of techniques used to evaluate the quality and usefulness of test or survey items. While item analysis techniques are frequently used in test construction, they are helpful when designing surveys as well. Item analysis focuses on individual items rather than the entire set of items (as measures such as Cronbach’s alpha do). Item analysis techniques can be used to identify how individuals respond to items and how well items discriminate between those with high and low scores. Item analysis can be used during pilot testing to help choose the best items for inclusion in the final set. While there are many methods for conducting item analysis, this post will focus on two: item difficulty/endorsability and item discrimination.

Item Difficulty/Endorsability

Item difficulty, or item endorsability, is simply the mean, or average, response (Meyer, 2014). For test items that have a “correct” response, we use the term item difficulty, which refers to the proportion of individuals who answered the item correctly. However, when using surveys with Likert-type response options (e.g., strongly disagree, disagree, agree, strongly agree), where there is no “correct” answer, we can think of the item mean as item endorsability or the extent to which the highest response option is endorsed. We often divide the mean, or average response, by the maximum possible response to put endorsability on the same scale as difficulty (i.e., ranging from 0 to 1).

A high difficulty value (i.e., close to 1) indicates an item that is too easy, while a low difficulty value (i.e., close to 0) suggests an overly difficult item or an item that few respondents endorse. Typically, we look for difficulty values between 0.3 and 0.7; Allen and Yen (1979) argue this range maximizes the information a test provides about differences among respondents. While Allen and Yen were referring to test items, surveys with Likert-type response options generally follow the same recommendation. An item with low endorsability indicates that people have a difficult time endorsing the item or selecting higher response options such as strongly agree, whereas an item with high endorsability indicates that the item is easy to endorse. Very high or very low values for difficulty/endorsability may indicate that we need to review the item. Examining the proportions for each response option is also useful because it shows how frequently each response category was used. If a response category is not used or is selected by only a few respondents, this may indicate that the item is ambiguous or confusing.
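
As an illustration (simulated data only, separate from the R code the author links to below), this R sketch computes endorsability as the item mean divided by the maximum possible response and tabulates the proportion of respondents choosing each response option.

```r
# Simulate hypothetical Likert-type data: a common trait plus noise, cut into a 1-4 scale
set.seed(7)
trait <- rnorm(200)
items <- as.data.frame(sapply(1:5, function(j) {
  as.integer(cut(trait + rnorm(200), breaks = c(-Inf, -1, 0, 1, Inf)))
}))
names(items) <- paste0("item", 1:5)

# Endorsability: item mean divided by the maximum possible response (here, 4)
round(colMeans(items) / 4, 2)

# Proportion of respondents selecting each response option, per item
lapply(items, function(x) round(prop.table(table(factor(x, levels = 1:4))), 2))
```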

Item Discrimination

Item discrimination is a measure of the relationship between scores on an item and the overall score on the construct the survey is measuring (Meyer, 2014). It measures the degree to which an item differentiates individuals who score high on the survey from those who score low on the survey. It aids in determining whether an item is positively or negatively correlated with the total performance. We can think of item discrimination as how well an item is tapping into the latent construct. Discrimination is typically measured using an item-total correlation to assess the relationship between an item and the overall score. Pearson’s correlation and its variants (i.e., point-biserial correlation) are the most common, but other types of correlations such as biserial and polychoric correlations can be used.

Meyer (2014) suggests selecting items with positive discrimination values between 0.3 and 0.7 and items that have large variances. When the item-total correlation exceeds 0.7, the item may be redundant; a content analysis or expert review panel could help decide which items to keep. A negative discrimination value suggests that the item is negatively related to the total score, which may point to a data entry error, a poorly written item, or an item that needs to be reverse coded. Whatever the case, negative discrimination is a flag to inspect that item. Items with low discrimination tap into the construct poorly and should be revised or eliminated. Very easy or very difficult items can also cause low discrimination, so it is good to check whether that is the reason as well. Examining discrimination coefficients for each response option is also helpful. We typically want to see a pattern where lower response options (e.g., strongly disagree, disagree) have negative discrimination coefficients and higher response options (e.g., agree, strongly agree) have positive coefficients, with the magnitude of the correlations highest at the ends of the response scale (we would look for the opposite pattern if the item is negatively worded).
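
Here is a minimal sketch of the item-total approach in R, reusing the hypothetical responses data frame from the sketch above. The "corrected" item-total correlation drops the item from the total score so the item is not correlated with itself.

# Corrected item-total correlation for each item: correlate the item with the
# sum of the remaining items (the "rest score").
item_discrimination <- sapply(names(responses), function(item) {
  rest_score <- rowSums(responses[, setdiff(names(responses), item), drop = FALSE])
  cor(responses[[item]], rest_score)
})
round(item_discrimination, 2)

# Flags based on the guidelines discussed above:
item_discrimination[item_discrimination < 0]    # inspect: reverse coding, data entry, wording
item_discrimination[item_discrimination > 0.7]  # inspect: possible redundancy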

Conclusion

Item difficulty/endorsability and item discrimination are two easy techniques researchers can use to improve the quality of their survey items. These techniques can easily be implemented alongside other statistics, such as internal consistency reliability.
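
As a quick illustration of that last point, here is a minimal sketch that computes Cronbach's alpha directly from its definition, again using the hypothetical responses data frame from the sketches above (no additional packages assumed).

# Coefficient alpha: k/(k-1) * (1 - sum of item variances / variance of total score)
cronbach_alpha <- function(items) {
  k <- ncol(items)
  item_vars <- apply(items, 2, var)
  total_var <- var(rowSums(items))
  (k / (k - 1)) * (1 - sum(item_vars) / total_var)
}
cronbach_alpha(responses)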

___________________________________________________________________

References

Allen, M. & Yen, W. (1979). Introduction to measurement theory. Wadsworth.

Meyer, J. P. (2014). Applied measurement with jMetrik. Routledge.

Resources

I have created some R code and output to demonstrate how to implement and interpret an item analysis.

The Standards for Educational and Psychological Testing

Filed Under: Evaluation Methodology Blog

Are Evaluation PhD Programs Offering Training in Qualitative and Mixed Design Methodologies

January 1, 2024 by Jonah Hall

By Kiley Compton

Hello! My name is Kiley Compton and I am a fourth-year doctoral student in UT’s Evaluation, Statistics, and Methodology (ESM) program. My research interests include program evaluation, research administration, and sponsored research metrics.  

One of the research projects I worked on as part of the ESM program examined curriculum requirements in educational evaluation, assessment, and research (EAR) doctoral programs. Our team comprised first- and second-year ESM doctoral students with diverse backgrounds, research interests, and skill sets.

An overwhelming amount of preliminary data forced us to reconsider the scope of the project. The broad focus of the study was not manageable, so we narrowed the scope and focused on the prevalence of mixed method and qualitative research methodology courses offered in U.S. PhD programs.  Experts in the field of evaluation encourage the use of qualitative and mixed method approaches to gain an in-depth understanding of the program, process, or policy being evaluated (Bamberger, 2015; Patton, 2014).  The American Evaluation Association developed a series of competencies to inform evaluation education and training standards, which includes competency in “quantitative, qualitative, and mixed designs” methodologies (AEA, 2018). Similarly, Skolits et al. (2009) advocate for professional training content that reflects the complexity of evaluations.  

This study was guided by the following research question: What is the prevalence of qualitative and mixed methods courses in Educational Assessment, Evaluation, and Research PhD programs? Sub-questions include: 1) To what extent are the courses required, elective, or optional? and 2) To what extent are these courses offered at more advanced levels? For the purpose of this study, elective courses are those that fulfill a specific, focused requirement, while optional courses are those that are offered but do not fulfill elective requirements.

Methods 

This study focused on PhD programs similar to UT’s ESM program. PhD programs from public and private institutions were selected based on the U.S. Department of Education’s National Center for Education Statistics (NCES) Classification of Instructional Programs (CIP) assignment. Programs under the 13.06 “Educational Assessment, Evaluation, and Research” CIP umbrella were included.  We initially identified a total of 50 programs. 

Our team collected and reviewed available program- and course-level data from program websites, handbooks, and catalogs, and assessed which elements were necessary to answer the research questions. We created a comprehensive data code book based on agreed-upon definitions and met regularly throughout the data collection process to assess progress, discuss ambiguous data, and refine definitions as needed. More than 14 program-level data points were collected, including program overview, total credit hours required, and number of dissertation hours required. Additionally, available course data were collected, including course number, name, type, level, requirement level, description, and credit hours. While 50 programs were identified, only 36 were included in the final analysis due to unavailable or incomplete data. After collecting detailed information for the 36 programs, course-level information was coded based on the variables of interest: course type, course level, and requirement level.

Results 

Prevalence of qualitative and mixed methods courses

The team analyzed data from 1,134 courses representing 36 programs, both in aggregate and within individual programs. Results show that only 14% (n=162) of the courses offered or required to graduate were identified as primarily qualitative and only 1% (n=17) of these courses were identified as mixed methods research (MMR). Further, only 6% (n=70) of these courses were identified as evaluation courses (Table 1). Out of 36 programs, three programs offered no qualitative courses. Qualitative courses made up somewhere between 1% and 20% of course offerings for 28 programs. Only five of the programs reviewed exceeded 20%. Only 12 programs offered any mixed methods courses and MMR courses made up less than 10% of the course offerings in each of those programs. 

Table 1. 

Aggregate Course Data by Type and Representation

Course Type               Courses, n (%)       Programs Offering, n (%)
Quantitative Methods      409 (36%)            36 (100%)
Other                     317 (28%)            36 (100%)
Qualitative Methods       162 (14%)            33 (92%)
Research Methods          159 (14%)            36 (100%)
Program Evaluation        70 (6%)              36 (100%)
Mixed Methods             17 (1%)              12 (33%)
Total                     1,134 (100%)         –

 

Requirement level of qualitative and mixed method courses 

Out of 162 qualitative courses, 41% (n=66) were listed as required, 43% (n=69) were listed as elective, and 16% (n=26) were listed as optional (figure 2). Out of 17 mixed methods research courses, 65% (n=11) were listed as required and 35% (n=6) were listed as elective.  

Course level of qualitative and mixed-method courses 

Out of 162 qualitative courses, 73% (n=118) were offered at an advanced level and 27% (n=44) were offered at an introductory level. Out of 17 mixed methods research courses, 71% (n=12) were offered at an advanced level and 29% (n=5) were offered at an introductory level.

Discussion 

Findings from the study provide valuable insight into the landscape of doctoral curricula in Educational Assessment, Evaluation, and Research programs. Both qualitative and mixed methods courses were underrepresented in the programs analyzed, although most of the qualitative and mixed methods courses that were offered were classified as advanced, and many were required. Given that various methodologies are needed to conduct rigorous evaluations, it is our hope that these findings will encourage doctoral training programs to include more courses on qualitative and mixed methods, and that they will encourage seasoned and novice evaluators to seek out training in these methodologies.

This study also highlights opportunities for collaborative work in the ESM program and the ESM faculty’s commitment to fostering professional development. The study began as a project for a research seminar. ESM faculty mentored us through proposal development, data collection and analysis, and dissemination. They also encouraged us to share our findings at conferences and in journals and helped us through the process of drafting and submitting abstracts and manuscripts. Faculty worked closely with our team through every step of the process, serving as both expert consultants and supportive colleagues.

The study also highlights how messy data can get. Our team even affectionately nicknamed the project “messy MESA,” a play on the common acronym for measurement, evaluation, statistics, and assessment (MESA) and a nod to the project’s challenges, including changes to the scope, missing data, and changes to the team as students left and joined. While I hope that the product of our study will contribute to the fields of evaluation, assessment, and applied research, the process has made me a better researcher.

References 

American Evaluation Association. (2018). AEA evaluator competencies. https://www.eval.org/About/Competencies-Standards/AEA-Evaluator-Competencies

Bamberger, M. (2015). Innovations in the use of mixed methods in real-world evaluation. Journal of Development Effectiveness, 7(3), 317–326. https://doi.org/10.1080/19439342.2015.1068832 

Capraro, R. M., & Thompson, B. (2008). The educational researcher defined: What will future researchers be trained to do? The Journal of Educational Research, 101, 247-253. doi:10.3200/JOER.101.4.247-253 

Dillman, L. (2013). Evaluator skill acquisition: Linking educational experiences to competencies. The American Journal of Evaluation, 34(2), 270–285. https://doi.org/10.1177/1098214012464512 

Engle, M., Altschuld, J. W., & Kim, Y. C. (2006). 2002 Survey of evaluation preparation programs in universities: An update of the 1992 American Evaluation Association–sponsored study. American Journal of Evaluation, 27(3), 353-359.  

LaVelle, J. M. (2020). Educating evaluators 1976–2017: An expanded analysis of university-based evaluation education programs. American Journal of Evaluation, 41(4), 494-509. 

LaVelle, J. M., & Donaldson, S. I. (2015). The state of preparing evaluators. In J. W. Altschuld & M.Engle (Eds.), Accreditation, certification, and credentialing: Relevant concerns for U.S. evaluators. New Directions for Evaluation,145, 39–52. 

Leech, N. L., & Goodwin, L. D. (2008). Building a methodological foundation: Doctoral-Level methods courses in colleges of education. Research in the Schools, 15(1). 

Leech, N. L., & Haug, C. A. (2015). Investigating graduate level research and statistics courses in schools of education. International Journal of Doctoral Studies, 10, 93-110. Retrieved from http://ijds.org/Volume10/IJDSv10p093-110Leech0658.pdf 

Levine, A. (2007). Educating researchers. Washington, DC: The Education Schools Project. 

Mathison, S. (2008). What is the difference between evaluation and research—and why do we care. Fundamental Issues in Evaluation, 183-196. 

McAdaragh, M. O., LaVelle, J. M., & Zhang, L. (2020). Evaluation and supporting inquiry courses in MSW programs. Research on Social Work Practice, 30(7), 750-759. doi:10.1177/1049731520921243

McEwan, H., & Slaughter, H. (2004). A brief history of the college of education’s doctoral degrees. Educational Perspectives, 2(37), 3-9. Retrieved from https://files.eric.ed.gov/fulltext/EJ877606.pdf

National Center for Education Statistics. (2020). The Classification of Instructional Programs [Data set]. https://nces.ed.gov/ipeds/cipcode/default.aspx?y=56.  

Page, R. N. (2001). Reshaping graduate preparation in educational research methods: One school’s experience. Educational Researcher, 30(5), 19-25. 

Patton, M.Q. (2014). Qualitative evaluation and research methods (4th ed.). Sage Publications. 

Paul, C. A. (n.d.). Elementary and Secondary Education Act of 1965. Social Welfare History Project. Retrieved from https://socialwelfare.library.vcu.edu/programs/education/elementary-and-secondary-education-act-of-1965/

Seidling, M. B. (2015). Evaluator certification and credentialing revisited: A survey of American Evaluation Association members in the United States. In J. W. Altschuld & M. Engle (Eds.), Accreditation, certification, and credentialing: Relevant concerns for U.S. evaluators. New Directions for Evaluation,145, 87–102 

Skolits, G. J., Morrow, J. A., & Burr, E. M. (2009). Reconceptualizing evaluator roles. American Journal of Evaluation, 30(3), 275-295. 

Standerfer, L. (2006). Before NCLB: The history of ESEA. Principal Leadership, 6(8), 26-27. 

Trevisan, M. S. (2004). Practical training in evaluation: A review of the literature. American Journal of Evaluation, 25(2), 255-272. 

Warner, L. H. (2020). Developing interpersonal skills of evaluators: A service-learning approach. American Journal of Evaluation, 41(3), 432-451. 

 

Filed Under: Evaluation Methodology Blog, News

Learning to Learn New Research Methods: How Watching YouTube Helped Me Complete My First Client Facing Project

December 15, 2023 by Jonah Hall

By Austin Boyd

Every measurement, evaluation, statistics, and assessment (MESA) professional has their own “bag of tricks” to help them get the job done: their go-to set of evaluation, statistical, and methodological skills and tools that they are most comfortable applying. For many, these are the skills and tools that they were taught directly while obtaining their MESA degrees. But what do we do when we need new tools and methodologies that we weren’t taught directly by a professor?

My name is Austin Boyd, and I am a researcher, instructor, UTK ESM alumnus, and, most importantly, a lifelong learner. I have had the opportunity to work on projects in several different research areas, including psychometrics, para-social relationships, quality in higher education, and social network analysis. I seek out opportunities to learn about new areas of research and to apply my MESA skill set wherever I can. My drive to enter new research areas often leads to me realizing that, while I feel confident in the MESA skills and tools I currently possess, these are only a fraction of what I could be using in a given project. This leaves me with two options: 1) use a method that I am comfortable with but that might not be the perfect choice for the project; or 2) learn a new method that fits the needs of the project. Obviously, we have to choose option 2, but where do we even start learning a new research method?

In my first year of graduate school, I took on an evaluation client who had recently learned about Social Network Analysis (SNA), a method of visually displaying the social structure among social objects in terms of their relationships (Tichy & Fombrun, 1979). The client decided that this new analysis would revolutionize the way they looked at their professional development attendance but had no idea how to use it. This is where I came in, a new and excited PhD student, ready to take on the challenge. Except SNA wasn’t something we would be covering in class. In fact, it wasn’t something covered in any of the classes I could take. I had to begin teaching myself something that I had only just heard of. This is where I learned about two of the best starting points for any new researcher: Google and YouTube.

Although they aren’t the most conventional starting points for learning, you would be surprised how convenient they can be. I could have begun by looking in the literature for articles or textbooks that covered SNA. However, I didn’t have time to go through an entire textbook on the topic in addition to my normal coursework, and most of the articles I found were applied research, far above my current understanding. What I needed was an entry point that began with the basics of conducting an SNA. Google, unlike the journal articles, was able to take me to several websites covering the basics of SNA and even led me to free online trainings on SNA for beginners. YouTube was able to supplement this knowledge with step-by-step video instructions on how to conduct my own SNA, both in software I was already proficient in and in Gephi (Bastian, Heymann, & Jacomy, 2009), a new software designed specifically for this analysis. For examples of these friendly starting points, see the SNA resources below.

[Image: Marvel character social network]

 

These videos and websites weren’t perfect, and they certainly weren’t what I ended up citing in my final report to my client, but they were a starting point: a stepping stone that got me to a place where reading the literature didn’t leave me confused, frustrated, and scared that I would have to abandon the project. This allowed me to successfully complete my first client-facing research project, and the client was thrilled with the results. Eventually, I even became comfortable enough to see areas for improvement in the literature, leading me to author my own paper presenting a function that reformats data for use in one- and two-mode undirected social network analysis (Boyd & Rocconi, 2021). I’ve even used my free time to apply what I learned for fun, creating social networks for the Marvel Cinematic Universe and the Pokémon game franchise (see below).
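
For readers who want to try a small example before diving into the resources below, here is a minimal sketch of a one-mode undirected network in R using the igraph package. The choice of igraph and the edge list are mine, purely for illustration; the post itself points to Gephi and other tools.

# Requires the igraph package: install.packages("igraph")
library(igraph)

# Hypothetical edge list: pairs of attendees who went to the same
# professional development session.
edges <- data.frame(
  from = c("Ann", "Ann", "Ben", "Cam", "Dee"),
  to   = c("Ben", "Cam", "Cam", "Dee", "Ann")
)

# Build a one-mode undirected network from the edge list.
g <- graph_from_data_frame(edges, directed = FALSE)

# Basic descriptives: who is most connected, and how dense is the network?
degree(g)
edge_density(g)

# Quick visual check of the social structure.
plot(g, vertex.size = 30, vertex.label.cex = 0.8)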

It is unrealistic to expect to master every type of data analysis method that exists in just four years of graduate school. And even if we could, the field continues to expand every day, with new methods, tools, and programs being added to aid in conducting research. This requires all of us to be lifelong learners who aren’t afraid to pick up new skills, even if that means starting by watching some YouTube videos.

 

References

Bastian, M., Heymann, S., & Jacomy, M. (2009). Gephi: An open source software for exploring and manipulating networks. International AAAI Conference on Weblogs and Social Media.

Boyd, A. T., & Rocconi, L. M. (2021). Formatting data for one and two mode undirected social network analysis. Practical Assessment, Research & Evaluation, 26(24). Available online: https://scholarworks.umass.edu/pare/vol26/iss1/24/  

Tichy, N., & Fombrun, C. (1979). Network analysis in organizational settings. Human Relations, 32(11), 923–965. https://doi.org/10.1177/001872677903201103

SNA Resources 

Aggarwal, C. C. (2011). An introduction to social network data analytics. In Social network data analytics. Springer.

Yang, S., Keller, F., & Zheng, L. (2017). Social network analysis: methods and examples. Los Angeles: Sage. 

https://visiblenetworklabs.com/guides/social-network-analysis-101/ 

https://github.com/gephi/gephi/wiki 

https://towardsdatascience.com/network-analysis-d734cd7270f8 

https://virtualitics.com/resources/a-beginners-guide-to-network-analysis/ 

https://ladal.edu.au/net.html 

Videos 

https://www.youtube.com/watch?v=xnX555j2sI8&ab_channel=DataCamp 

https://www.youtube.com/playlist?list=PLvRW_kd75IZuhy5AJE8GUyoV2aDl1o649 

https://www.youtube.com/watch?v=PT99WF1VEws&ab_channel=AlexandraOtt 

https://www.youtube.com/playlist?list=PL4iQXwvEG8CQSy4T1Z3cJZunvPtQp4dRy 

 

Filed Under: Evaluation Methodology Blog
