Educational Leadership and Policy Studies

Evaluation Methodology Blog

Timing is Everything… Or Is It? How Do We Incentivize Survey Participation?

February 1, 2024 by Jonah Hall

By M. Andrew Young

Hello! My name is M. Andrew Young. I am a second-year student in the Evaluation, Statistics, and Methodology Ph.D. program here at UT-Knoxville. I currently work in higher education assessment as a Director of Assessment at East Tennessee State University's College of Pharmacy.

Let me tell you a story, and you are the main character!

4:18pm Friday Afternoon:

Aaaaaand *send*.

You put the finishing touches on your email. You've had a busy but productive day. Your phone buzzes. You reach down to the desk and turn on the screen to see a message from friends you haven't seen in a while.

Tonight still good?

"Oh no! I forgot!" you tell yourself as you flop back in your office chair. "I was supposed to bring some drinks and a snack to their house tonight."

As it stands – you have nothing.

You look down at your phone while you recline in your office chair, searching for “grocery stores near me.” You find the nearest result and bookmark it for later. You have a lot left to do, and right now, you can’t be bothered.

Yes! I am super excited! When is everyone arriving? You type hurriedly in your messaging app and hit send.

You can’t really focus on anything else. One minute passes by and your phone lights up again with the notification of a received text message.

Everyone is getting here around 6. See you soon!

Thanks! Looking forward to it!

You lay your phone down and dive back into your work.

4:53pm:

Work is finally wrapped up. You pack your laptop into your backpack, grab a stack of papers, and joggle them on your desk to get them at least a little orderly before you jam them in as well. You shut your door and rush to your vehicle. You start your car and navigate to the grocery store you bookmarked earlier.

“17 minutes to your destination,” your GPS says.

5:12pm:

It took two extra minutes to arrive because, as usual, you caught the stoplights on the wrong rotation. You finally find a parking spot, shuffle out of your car and head toward the entrance.

You freeze for a moment. You see them.

You've seen them many times, and you always try to avoid them. You know there is going to be the awkward interaction of a greeting and a request of some sort, usually for money. Your best course of action is to ignore them. Everyone knows that you hear them, but it is a small price to pay in your hurry.

Sure enough, “Hello! Can you take three minutes of your time to answer a survey? We’ll give you a free t-shirt for your time!”

You shoot them a half smile and a glance as you pick up your pace and rush past the pop-up canopy and a table stacked with items you barely notice.

Shopping takes longer than you’d hoped. The lines are long at this time of day. You don’t have much, just an armful of goods, but no matter, you must wait your turn. Soon, you make your way out of the store to be unceremoniously accosted again.

5:32pm:

You have to drive across town. Now you won't even have enough time to go home and change before your dinner engagement. You rush toward the exit. The sliding doors part as you pass through, right by them.

“Please! If you will take three minutes, we will give you a T-shirt. We really want your opinion on an important matter in your community!”

You gesture with your hand and explain, “I’m sorry, but I’m in a terrible rush!”

——————————————————————————————–

So, what went wrong for the survey researchers? Why didn't you answer the survey? They were at the same place at the same time as you. They offered you an incentive to participate. They specified that it would only take three minutes of your time. So, why did you brush them off, just as you have brushed off so many other charities and survey administrators stationed in front of your store of choice?

Oftentimes, we are asked for our input, or our charity, but before we even receive the first invitation, we have already determined that we will not participate. Why? In this scenario, you were in a hurry. The incentive they were offering was not motivating to you.

Would it have changed your willingness to participate if they offered a $10 gift card to the store you were visiting? Maybe, maybe not.

The question, more and more, is how do we incentivize participation in a survey? Paper, online, or person-to-person, all survey modes are suffering from the conundrum of falling response rates (Lindgren et al., 2020). This impacts the validity of your research study. How can you ensure that you are getting a heterogeneous sample from your population? How can you be sure that you are getting the data you need from the people you want to sample? This can be a challenge.

In recently published work on survey incentives, many studies acknowledge that time and place affect participation, but we don't quite understand how. Some studies, such as Lindgren et al. (2020), have tried to determine the best time of day and day of the week to invite survey participants, but they discuss a limitation of their own study that is endemic to many others: the lack of heterogeneity among participants and the interplay of response and nonresponse bias:

While previous studies provide important empirical insights into the largely understudied role of timing effects in web surveys, there are several reasons why more research on this topic is needed. First, the results from previous studies are inconclusive regarding whether the timing of the invitation e-mails matter in web survey modes (Lewis & Hess, 2017, p. 354). Secondly, existing studies on timing effects in web surveys have mainly been conducted in an American context, with individuals from specific job sectors (where at least some can be suspected to work irregular hours and have continuous access to the Internet). This makes research in other contexts than the American, and with more diverse samples of individuals, warranted (Lewis & Hess, 2017, p. 361; Sauermann & Roach, 2013, p. 284). Thirdly, only the Lewis and Hess (2017), Sauermann and Roach (2013), and Zheng (2011) studies are recent enough to provide dependable information to today’s web survey practitioners, due to the significant, and rapid changes in online behavior the past decades. (p. 228)

Timing, place/environment, and matching the incentive to the situation and participant (and maybe even the topic, if possible) are all influential in improving response rates. Best practices indicate that pilot testing survey items can help create a better survey, but how about finding what motivates your target population to even agree to begin the survey in the first place? That is less explored, and I think it is an opportunity for further study.

This gets even harder when you are trying to reach hard-to-reach populations. Many times, it takes a variety of approaches, but what is less understood is how to decide on your initial approach. The challenge that other studies have run into, and something that I think will continue to present itself as a hurdle, is this: because of the lack of research on timing and location, and because of the lack of heterogeneity in the studies that do exist, the generalizability of existing findings is limited, if not altogether impractical. So, that leads me full circle back to pilot-testing incentives and timing for surveys. Get to know your audience!
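If you do pilot-test incentives, the analysis itself can stay simple. Below is a minimal sketch in R of how a pilot comparison of response rates across incentive arms might look; the three arms (t-shirt, gift card, no incentive) and all of the counts are hypothetical, so swap in your own pilot data.

# Hypothetical pilot: completed surveys and invitations for three incentive arms
completed <- c(tshirt = 18, gift_card = 31, none = 12)    # completions per arm (made-up numbers)
invited   <- c(tshirt = 150, gift_card = 150, none = 150) # invitations sent per arm

# Response rate for each incentive arm
round(completed / invited, 2)

# Chi-squared test of equal response proportions across the three arms
prop.test(completed, invited)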

Cool Citations to Read:

Guillory, J., Wiant, K. F., Farrelly, M., Fiacco, L., Alam, I., Hoffman, L., Crankshaw, E., Delahanty, J., & Alexander, T. N. (2018). Recruiting Hard-to-Reach Populations for Survey Research: Using Facebook and Instagram Advertisements and In-Person Intercept in LGBT Bars and Nightclubs to Recruit LGBT Young Adults. J Med Internet Res, 20(6), e197. https://doi.org/10.2196/jmir.9461

Lindgren, E., Markstedt, E., Martinsson, J., & Andreasson, M. (2020). Invitation Timing and Participation Rates in Online Panels: Findings From Two Survey Experiments. Social Science Computer Review, 38(2), 225–244. https://doi.org/10.1177/0894439318810387

Robinson, S. B., & Leonard, K. F. (2018). Designing Quality Survey Questions. SAGE Publications, Inc. [This is our required book in Survey Research!]

Smith, E., Loftin, R., Murphy-Hill, E., Bird, C., & Zimmermann, T. (2013). Improving developer participation rates in surveys. 2013 6th International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE), 89–92. https://doi.org/10.1109/CHASE.2013.6614738

Smith, N. A., Sabat, I. E., Martinez, L. R., Weaver, K., & Xu, S. (2015). A Convenient Solution: Using MTurk To Sample From Hard-To-Reach Populations. Industrial and Organizational Psychology, 8(2), 220–228. https://doi.org/10.1017/iop.2015.29

Neat Websites to Peek At:

https://blog.hubspot.com/service/best-time-send-survey (limitations again: no understanding of respondent demographics; they did say not to send surveys during high-volume work times, but not everyone works the same M-F, 8:00am-4:30pm job)

https://globalhealthsciences.ucsf.edu/sites/globalhealthsciences.ucsf.edu/files/tls-res-guide-2nd-edition.pdf (this is targeted directly towards certain segments of hard-to-reach populations. Again, generalizability challenges, but the idea is there)

Filed Under: Evaluation Methodology Blog

Making the Most of Your Survey Items: Item Analysis

January 15, 2024 by Jonah Hall

By Louis Rocconi, Ph.D. 

Hi, blog world! My name is Louis Rocconi, and I am an Associate Professor and Program Coordinator in the Evaluation, Statistics, and Methodology program at The University of Tennessee, and I am MAD about item analysis. In this blog post, I want to discuss an often overlooked tool to examine and improve survey items: Item Analysis.

What is Item Analysis?

Item analysis is a set of techniques used to evaluate the quality and usefulness of test or survey items. While item analysis techniques are frequently used in test construction, they are helpful when designing surveys as well. Item analysis focuses on individual items rather than the entire set of items (as measures such as Cronbach's alpha do). Item analysis techniques can be used to identify how individuals respond to items and how well items discriminate between those with high and low scores. Item analysis can be used during pilot testing to help choose the best items for inclusion in the final set. While there are many methods for conducting item analysis, this post will focus on two: item difficulty/endorsability and item discrimination.

Item Difficulty/Endorsability

Item difficulty, or item endorsability, is simply the mean, or average, response (Meyer, 2014). For test items that have a “correct” response, we use the term item difficulty, which refers to the proportion of individuals who answered the item correctly. However, when using surveys with Likert-type response options (e.g., strongly disagree, disagree, agree, strongly agree), where there is no “correct” answer, we can think of the item mean as item endorsability or the extent to which the highest response option is endorsed. We often divide the mean, or average response, by the maximum possible response to put endorsability on the same scale as difficulty (i.e., ranging from 0 to 1).

A high difficulty value (i.e., close to 1) indicates an item that is too easy, while a low difficulty value (i.e., close to 0) suggests an overly difficult item or an item that few respondents endorse. Typically, we are looking for difficulty values between 0.3 and 0.7. Allen and Yen (1979) argue this range maximizes the information a test provides about differences among respondents. While Allen and Yen were referring to test items, surveys with Likert-type response options generally follow the same recommendations. An item with a low endorsability indicates that people are having a difficult time endorsing the item or selecting higher response options such as strongly agree, whereas an item with a high endorsability indicates that the item is easy to endorse. Very high or very low values for difficulty/endorsability may indicate that we need to review the item. Examining the proportions for each response option is also useful: it shows how frequently each response category was used. If a response category is not used or is selected by only a few respondents, this may indicate that the item is ambiguous or confusing.
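As a rough illustration, here is a minimal sketch in R of how difficulty/endorsability could be computed for a small set of Likert-type items. The data are simulated, the item names are placeholders, and the 0.3-0.7 screening range simply follows the discussion above.

# Simulated responses: 200 respondents, 5 items, 4-point scale (1 = strongly disagree ... 4 = strongly agree)
set.seed(1)
responses <- as.data.frame(matrix(sample(1:4, 200 * 5, replace = TRUE), ncol = 5))
names(responses) <- paste0("item", 1:5)

# Endorsability: item mean divided by the maximum possible response (here, 4)
endorsability <- colMeans(responses) / 4
round(endorsability, 2)

# Flag items outside the 0.3-0.7 range for review
names(endorsability)[endorsability < 0.3 | endorsability > 0.7]

# Proportion of respondents selecting each response option, per item
sapply(responses, function(x) prop.table(table(factor(x, levels = 1:4))))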

Item Discrimination

Item discrimination is a measure of the relationship between scores on an item and the overall score on the construct the survey is measuring (Meyer, 2014). It measures the degree to which an item differentiates individuals who score high on the survey from those who score low on the survey. It aids in determining whether an item is positively or negatively correlated with the total performance. We can think of item discrimination as how well an item is tapping into the latent construct. Discrimination is typically measured using an item-total correlation to assess the relationship between an item and the overall score. Pearson's correlation and its variants (e.g., the point-biserial correlation) are the most common, but other types of correlations such as biserial and polychoric correlations can be used.

Meyer (2014) suggests selecting items with positive discrimination values between 0.3 and 0.7 and items that have large variances. When the item-total correlation exceeds 0.7, it suggests the item may be redundant. A content analysis or expert review panel could be used to help decide which items to keep. A negative discrimination value suggests that the item is negatively related to the total score. This may indicate a data entry error, a poorly written item, or an item that needs to be reverse coded. Whatever the case, negative discrimination is a flag telling you to inspect that item. Items with low discrimination tap into the construct poorly and should be revised or eliminated. Very easy or very difficult items can also cause low discrimination, so it is good to check whether that is the reason as well. Examining discrimination coefficients for each response option is also helpful. We typically want to see a pattern where lower response options (e.g., strongly disagree, disagree) have negative discrimination coefficients, higher response options (e.g., agree, strongly agree) have positive coefficients, and the magnitude of the correlations is highest at the ends of the response scale (we would look for the opposite pattern if the item is negatively worded).
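One common way to compute the discrimination described above is a corrected item-total correlation, where each item is correlated with the total score of the remaining items so the item does not inflate its own correlation. The sketch below, again with simulated data purely for illustration, computes and screens those values (with random data, most items will be flagged, which is exactly what low discrimination looks like).

# Simulated responses: 200 respondents, 5 items on a 4-point scale
set.seed(1)
responses <- as.data.frame(matrix(sample(1:4, 200 * 5, replace = TRUE), ncol = 5))
names(responses) <- paste0("item", 1:5)

total <- rowSums(responses)

# Corrected item-total correlation: each item vs. the total score of the other items
discrimination <- sapply(names(responses), function(i) cor(responses[[i]], total - responses[[i]]))
round(discrimination, 2)

# Items with low or negative discrimination warrant inspection (reverse coding, wording, data entry)
names(discrimination)[discrimination < 0.3]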

Conclusion

Item difficulty/endorsability and item discrimination are two easy techniques researchers can use to help improve the quality of their survey items. These techniques can easily be implemented alongside other statistics such as internal consistency reliability.

___________________________________________________________________

References

Allen, M. & Yen, W. (1979). Introduction to measurement theory. Wadsworth.

Meyer, J. P. (2014). Applied measurement with jMetrik. Routledge.

Resources

I have created some R code and output to demonstrate how to implement and interpret an item analysis.

The Standards for Educational and Psychological Testing

Filed Under: Evaluation Methodology Blog

Are Evaluation PhD Programs Offering Training in Qualitative and Mixed Design Methodologies?

January 1, 2024 by Jonah Hall

By Kiley Compton

Hello! My name is Kiley Compton and I am a fourth-year doctoral student in UT’s Evaluation, Statistics, and Methodology (ESM) program. My research interests include program evaluation, research administration, and sponsored research metrics.  

One of the research projects I worked on as part of the ESM program examined curriculum requirements in educational evaluation, assessment, and research (EAR) doctoral programs. Our team was composed of first- and second-year ESM doctoral students with diverse backgrounds, research interests, and skill sets.

An overwhelming amount of preliminary data forced us to reconsider the scope of the project. The broad focus of the study was not manageable, so we narrowed the scope and focused on the prevalence of mixed method and qualitative research methodology courses offered in U.S. PhD programs.  Experts in the field of evaluation encourage the use of qualitative and mixed method approaches to gain an in-depth understanding of the program, process, or policy being evaluated (Bamberger, 2015; Patton, 2014).  The American Evaluation Association developed a series of competencies to inform evaluation education and training standards, which includes competency in “quantitative, qualitative, and mixed designs” methodologies (AEA, 2018). Similarly, Skolits et al. (2009) advocate for professional training content that reflects the complexity of evaluations.  

This study was guided by the following research question: What is the prevalence of qualitative and mixed methods courses in Educational Assessment, Evaluation, and Research PhD programs? Sub-questions include: 1) To what extent are the courses required, elective, or optional? and 2) To what extent are these courses offered at more advanced levels? For the purpose of this study, elective courses are those that fulfill a specific, focused requirement, while optional courses are those that are offered but do not fulfill elective requirements.

Methods 

This study focused on PhD programs similar to UT’s ESM program. PhD programs from public and private institutions were selected based on the U.S. Department of Education’s National Center for Education Statistics (NCES) Classification of Instructional Programs (CIP) assignment. Programs under the 13.06 “Educational Assessment, Evaluation, and Research” CIP umbrella were included.  We initially identified a total of 50 programs. 

Our team collected and reviewed available program- and course-level data from program websites, handbooks, and catalogs, and assessed which elements were necessary to answer the research questions. We created a comprehensive data code book based on agreed upon definitions and met regularly throughout the data collection process to assess progress, discuss ambiguous data, and refine definitions as needed. More than 14 program-level data points were collected, including program overview, total credit hours required, and number of dissertation hours required. Additionally, available course data were collected, including course number, name, type, level, requirement level, description, and credit hours. While 50 programs were identified, only 36 of the 50 programs were included in the final analysis due to unavailable or incomplete data. After collecting detailed information for the 36 programs, course-level information was coded based on the variables of interest: course type, course level, and requirement level.  
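To give a sense of how a coded course-level data set like this can be tabulated, here is a minimal sketch in R. The tiny data frame, its column names, and its values are purely illustrative and are not the team's actual code book or data.

# Hypothetical coded course data: one row per course
courses <- data.frame(
  program     = c("A", "A", "B", "B", "B", "C"),
  course_type = c("Quantitative", "Qualitative", "Qualitative", "Mixed Methods", "Other", "Quantitative"),
  requirement = c("Required", "Elective", "Required", "Optional", "Elective", "Required")
)

# Aggregate prevalence of each course type (counts and percentages)
table(courses$course_type)
round(100 * prop.table(table(courses$course_type)), 1)

# Number of programs offering at least one course of each type
aggregate(program ~ course_type, data = courses, FUN = function(x) length(unique(x)))

# Requirement level within each course type
table(courses$course_type, courses$requirement)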

Results

Prevalence of qualitative & mixed methods courses

The team analyzed data from 1,134 courses representing 36 programs, both in aggregate and within individual programs. Results show that only 14% (n=162) of the courses offered or required to graduate were identified as primarily qualitative, and only 1% (n=17) were identified as mixed methods research (MMR) courses. Further, only 6% (n=70) were identified as evaluation courses (Table 1). Out of 36 programs, three offered no qualitative courses. Qualitative courses made up between 1% and 20% of course offerings for 28 programs; only five of the programs reviewed exceeded 20%. Only 12 programs offered any mixed methods courses, and MMR courses made up less than 10% of the course offerings in each of those programs.

Table 1. Aggregate Course Data by Type and Representation

Course Type              n (%)            Program Count
Quantitative Methods     409 (36%)        36 (100%)
Other                    317 (28%)        36 (100%)
Qualitative Methods      162 (14%)        33 (92%)
Research Methods         159 (14%)        36 (100%)
Program Evaluation        70 (6%)         36 (100%)
Mixed Methods             17 (1%)         12 (33%)
Total                  1,134 (100%)       –

 

Requirement level of qualitative and mixed method courses 

Out of 162 qualitative courses, 41% (n=66) were listed as required, 43% (n=69) were listed as elective, and 16% (n=26) were listed as optional (figure 2). Out of 17 mixed methods research courses, 65% (n=11) were listed as required and 35% (n=6) were listed as elective.  

Course level of qualitative and mixed-method courses 

Out of 162 qualitative courses, 73% (n=118) were offered at an advanced level and 27% (n=44) were offered at an introductory level. Out of 17 mixed methods research courses, 71% (n=12) were offered at an advanced level and 29% (n=5) were offered at an introductory level.

Discussion 

Findings from the study provide valuable insight into the landscape of doctoral curriculum in Educational Assessment, Evaluation, and Research programs. Both qualitative and mixed methods courses were underrepresented in the programs analyzed. However, the majority of these course offerings were required and classified as advanced. Given that various methodologies are needed to conduct rigorous evaluations, it is our hope that these findings will encourage doctoral training programs to include more courses on mixed and qualitative methods, and that they will encourage seasoned and novice evaluators to seek out training on these methodologies.

This study highlights opportunities for collaborative work in the ESM program and ESM faculty’s commitment to fostering professional development.  The project began as a project for a research seminar. ESM faculty mentored us through proposal development, data collection and analysis, and dissemination. They also encouraged us to share our findings at conferences and in journals and helped us through the process of drafting and submitting abstracts and manuscripts. Faculty worked closely with our team through every step of the process, serving as both expert consultants and supportive colleagues.  

The study also highlights how messy data can get. Our team even affectionately nicknamed the project "messy MESA," a nod to the common acronym for measurement, evaluation, statistics, and assessment (MESA) and to challenges including changes to the scope, missing data, and changes to the team as students left and joined. While I hope that the product of our study will contribute to the fields of evaluation, assessment, and applied research, the process has made me a better researcher.

References 

American Evaluation Association. (2018). AEA evaluator competencies. https://www.eval.org/About/Competencies-Standards/AEA-Evaluator-Competencies

Bamberger, M. (2015). Innovations in the use of mixed methods in real-world evaluation. Journal of Development Effectiveness, 7(3), 317–326. https://doi.org/10.1080/19439342.2015.1068832 

Capraro, R. M., & Thompson, B. (2008). The educational researcher defined: What will future researchers be trained to do? The Journal of Educational Research, 101, 247-253. doi:10.3200/JOER.101.4.247-253 

Dillman, L. (2013). Evaluator skill acquisition: Linking educational experiences to competencies. The American Journal of Evaluation, 34(2), 270–285. https://doi.org/10.1177/1098214012464512 

Engle, M., Altschuld, J. W., & Kim, Y. C. (2006). 2002 Survey of evaluation preparation programs in universities: An update of the 1992 American Evaluation Association–sponsored study. American Journal of Evaluation, 27(3), 353-359.  

LaVelle, J. M. (2020). Educating evaluators 1976–2017: An expanded analysis of university-based evaluation education programs. American Journal of Evaluation, 41(4), 494-509. 

LaVelle, J. M., & Donaldson, S. I. (2015). The state of preparing evaluators. In J. W. Altschuld & M. Engle (Eds.), Accreditation, certification, and credentialing: Relevant concerns for U.S. evaluators. New Directions for Evaluation, 145, 39–52.

Leech, N. L., & Goodwin, L. D. (2008). Building a methodological foundation: Doctoral-Level methods courses in colleges of education. Research in the Schools, 15(1). 

Leech, N. L., & Haug, C. A. (2015). Investigating graduate level research and statistics courses in schools of education. International Journal of Doctoral Studies, 10, 93-110. Retrieved from http://ijds.org/Volume10/IJDSv10p093-110Leech0658.pdf 

Levine, A. (2007). Educating researchers. Washington, DC: The Education Schools Project. 

Mathison, S. (2008). What is the difference between evaluation and research—and why do we care. Fundamental Issues in Evaluation, 183-196. 

McAdaragh, M. O., LaVelle, J. M., & Zhang, L. (2020). Evaluation and supporting inquiry courses in MSW programs. Research on Social Work Practice, 30(7), 750-759. doi:10.1177/1049731520921243

McEwan, H., & Slaughter, H. (2004). A brief history of the college of education's doctoral degrees. Educational Perspectives, 2(37), 3-9. Retrieved from https://files.eric.ed.gov/fulltext/EJ877606.pdf

National Center for Education Statistics. (2020). The Classification of Instructional Programs [Data set]. https://nces.ed.gov/ipeds/cipcode/default.aspx?y=56.  

Page, R. N. (2001). Reshaping graduate preparation in educational research methods: One school’s experience. Educational Researcher, 30(5), 19-25. 

Patton, M.Q. (2014). Qualitative evaluation and research methods (4th ed.). Sage Publications. 

Paul, C. A. (n.d.). Elementary and Secondary Education Act of 1965. Social Welfare History Project. Retrieved from https://socialwelfare.library.vcu.edu/programs/education/elementary-and-secondary-education-act-of-1965/

Seidling, M. B. (2015). Evaluator certification and credentialing revisited: A survey of American Evaluation Association members in the United States. In J. W. Altschuld & M. Engle (Eds.), Accreditation, certification, and credentialing: Relevant concerns for U.S. evaluators. New Directions for Evaluation, 145, 87–102.

Skolits, G. J., Morrow, J. A., & Burr, E. M. (2009). Reconceptualizing evaluator roles. American Journal of Evaluation, 30(3), 275-295. 

Standerfer, L. (2006). Before NCLB: The history of ESEA. Principal Leadership, 6(8), 26-27. 

Trevisan, M. S. (2004). Practical training in evaluation: A review of the literature. American Journal of Evaluation, 25(2), 255-272. 

Warner, L. H. (2020). Developing interpersonal skills of evaluators: A service-learning approach. American Journal of Evaluation, 41(3), 432-451. 

 

Filed Under: Evaluation Methodology Blog

Learning to Learn New Research Methods: How Watching YouTube Helped Me Complete My First Client Facing Project

December 15, 2023 by Jonah Hall

By Austin Boyd

Every measurement, evaluation, statistics, and assessment (MESA) professional has their own "bag of tricks" to help them get the job done: their go-to set of evaluation, statistical, and methodological skills and tools that they are most comfortable applying. For many, these are the skills and tools that they were taught directly while obtaining their MESA degrees. But what do we do when we need new tools and methodologies that we weren't taught directly by a professor?

My name is Austin Boyd, and I am a researcher, instructor, UTK ESM alumnus, and, most importantly, a lifelong learner. I have had the opportunity to work on projects in several different research areas, including psychometrics, para-social relationships, quality in higher education, and social network analysis. I seek out opportunities to learn about new areas of research while applying my MESA skill set wherever I can. My drive to enter new research areas often leads me to realize that, while I feel confident in the MESA skills and tools I currently possess, these are only a fraction of what I could be using in a given project. This leaves me with two options: 1) use a method that I am comfortable with but that might not be the perfect choice for the project; or 2) learn a new method that fits the needs of the project. Obviously, we have to choose option 2, but where do we even start learning a new research method?

In my first year of graduate school, I took on an evaluation client who had recently learned about Social Network Analysis (SNA), a method of visually displaying the social structure among social objects in terms of their relationships (Tichy & Fombrun, 1979). The client decided that this new analysis would revolutionize the way they looked at their professional development attendance but had no idea how to use it. This is where I came in, a new and excited PhD student, ready to take on the challenge. Except SNA wasn't something we would be covering in class. In fact, it wasn't covered in any of the classes I could take. I had to begin teaching myself something I had only just heard of. This is where I learned two of the best starting points for any new researcher: Google and YouTube.

Although they aren't the most conventional starting points for learning, you would be surprised how convenient they can be. I could have begun by looking in the literature for articles or textbooks that covered SNA. However, I didn't have time to go through an entire textbook on the topic in addition to my normal coursework, and most of the articles I found were applied research, far above my current understanding. What I needed was an entry point that began with the basics of conducting an SNA. Google, unlike the journal articles, was able to take me to several websites covering the basics of SNA and even led me to free online trainings on SNA for beginners. YouTube was able to supplement this knowledge with step-by-step video instructions on how to conduct my own SNA, both in software I was already proficient in and in Gephi (Bastian, Heymann, & Jacomy, 2009), a program designed specifically for this kind of analysis. For examples of these friendly starting points, see the SNA resources below.
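For readers who want a similarly gentle entry point in code, here is a minimal sketch using the igraph package in R (one of many SNA tools; Gephi is point-and-click). The edge list is a tiny hypothetical example, not the client's attendance data.

library(igraph)  # install.packages("igraph") if needed

# Hypothetical undirected edge list: pairs of people who attended sessions together
edges <- data.frame(
  from = c("Ana", "Ana", "Ben", "Cara", "Cara", "Dee"),
  to   = c("Ben", "Cara", "Cara", "Dee", "Eli", "Eli")
)

g <- graph_from_data_frame(edges, directed = FALSE)

# Basic descriptives: who is most connected, and how dense is the network?
degree(g)
edge_density(g)

# Quick visualization of the network
plot(g, vertex.size = 30, vertex.label.cex = 0.9)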

[Image: Marvel Cinematic Universe social network]

 

These videos and websites weren't perfect, and they certainly weren't what I ended up citing in my final report to my client, but they were a starting point: a stepping stone that got me to a place where reading the literature didn't leave me confused, frustrated, and scared that I would have to abandon the project. This allowed me to successfully complete my first client-facing research project, and the client was equally thrilled with the results. Eventually, I even became comfortable enough to see areas for improvement in the literature, leading me to author my own paper creating a function that reformats data for one- and two-mode undirected social network analysis (Boyd & Rocconi, 2021). I've even used my free time to apply what I learned for fun and created social networks for the Marvel Cinematic Universe and the Pokémon game franchise (see below).

It is unrealistic to expect to master every type of data analysis method that exists in just four years of graduate school. And even if we could, the field continues to expand every day, with new methods, tools, and programs being added to aid in conducting research. This requires us all to be lifelong learners who aren't afraid to learn new skills, even if it means starting by watching some YouTube videos.

References

Bastian, M., Heymann, S., & Jacomy, M. (2009). Gephi: An open source software for exploring and manipulating networks. International AAAI Conference on Weblogs and Social Media.

Boyd, A. T., & Rocconi, L. M. (2021). Formatting data for one and two mode undirected social network analysis. Practical Assessment, Research & Evaluation, 26(24). Available online: https://scholarworks.umass.edu/pare/vol26/iss1/24/  

Tichy, N., & Fombrun, C. (1979). Network Analysis in Organizational Settings. Human Relations, 32(11), 923– 965. https://doi.org/10.1177/001872677903201103 

SNA Resources 

Aggarwal, C. C. (2011). An Introduction to Social Network Data Analytics. Social Network Data Analytics. Springer, Boston, MA 

Yang, S., Keller, F., & Zheng, L. (2017). Social network analysis: methods and examples. Los Angeles: Sage. 

https://visiblenetworklabs.com/guides/social-network-analysis-101/ 

https://github.com/gephi/gephi/wiki 

https://towardsdatascience.com/network-analysis-d734cd7270f8 

https://virtualitics.com/resources/a-beginners-guide-to-network-analysis/ 

https://ladal.edu.au/net.html 

Videos 

https://www.youtube.com/watch?v=xnX555j2sI8&ab_channel=DataCamp 

https://www.youtube.com/playlist?list=PLvRW_kd75IZuhy5AJE8GUyoV2aDl1o649 

https://www.youtube.com/watch?v=PT99WF1VEws&ab_channel=AlexandraOtt 

https://www.youtube.com/playlist?list=PL4iQXwvEG8CQSy4T1Z3cJZunvPtQp4dRy 

 

Filed Under: Evaluation Methodology Blog

The What, When, Why, and How of Formative Evaluation of Instruction

December 1, 2023 by Jonah Hall

By M. Andrew Young

Hello! My name is M. Andrew Young. I am a second-year student in the Evaluation, Statistics, and Methodology Ph.D. program here at UT-Knoxville. I currently work in higher education assessment as a Director of Assessment at East Tennessee State University's College of Pharmacy. As part of my duties, I am frequently called upon to conduct classroom assessments.

Higher education assessment often employs summative evaluation of instruction at the end of a course, commonly known as course evaluations, summative assessment of instruction (SAI), or summative evaluation of instruction (SEI), among other titles. At my institution, the purpose of summative evaluation of instruction is primarily centered on evaluating faculty for tenure, promotion, and retention. What if there were a more student-centered approach to getting classroom evaluation feedback that not only benefits students in future classes (like summative assessment does), but also benefits students currently enrolled in the class? Enter formative evaluation of instruction (FEI).

 

What is FEI? 

FEI, sometimes referred to as midterm evaluation, entails seeking feedback from students prior to the semester midpoint to make mid-stream changes that will address each cohort's individual learning needs. Collecting meaningful and actionable FEI can prove challenging. Sometimes faculty may prefer not to participate in formative evaluation because they do not find the feedback from students actionable, or they may not value the student input. Furthermore, there is little direction on how to collect this feedback and how to use it for continual quality improvement in the classroom. While there is a lot of literature on summative evaluation of teaching, there seems to be a dearth of research on best practices for formative evaluation of teaching. The few articles that I have been able to discover offer suggestions for FEI, covered later in this post.

 

When Should We Use FEI? 

In my opinion, every classroom can benefit from formative evaluation. When to administer it is as much an art as it is a science. Timing is everything and the results can differ greatly depending on the timing of the administration of the evaluation. In my time working as a Director of Assessment, I have found that the most meaningful feedback can be gathered in the first half of the semester, directly after a major assessment. Students have a better understanding of their comprehension of the material and the effectiveness of the classroom instruction. There is very little literature to support this, so this is purely anecdotal. None of the resources I have found have prescribed precisely when FEI should be conducted, but the name implies that the feedback should be sought at or around the semester midpoint. 

 

Why Should We Conduct FEI? 

FEI Can Help:

  • Improve student satisfaction as reported on summative evaluations of instruction (Snooks et al., 2007; Veeck et al., 2016),
  • Make substantive changes to the classroom experience, including textbooks, examinations/assessments of learning, and instructional methods (Snooks et al., 2007; Taylor et al., 2020),
  • Strengthen teaching and improve rapport between students and faculty (Snooks et al., 2007; Taylor et al., 2020),
  • Improve faculty development, including promotion and tenure (Taylor et al., 2020; Veeck et al., 2016), and encourage active learning (Taylor et al., 2020), and
  • Bolster communication of expectations in a reciprocal relationship between instructor and student (Snooks et al., 2007; Taylor et al., 2020).

 

How Should We Administer the FEI? 

Research has provided a wide variety of suggested practices, including, but not limited to, involving a facilitator for the formative evaluation, asking open-ended questions, using no more than ten minutes of classroom time, keeping it anonymous, and keeping it short (Holt & Moore, 1992; Snooks et al., 2007; Taylor et al., 2020), and even having students work in groups to provide the feedback or holding student conferences (Fluckiger et al., 2010; Veeck et al., 2016).

Hanover (2022) concluded that formative evaluation should include: a 7-point Likert-scale question asking how the course is going for the student, followed by an open-ended question asking the student to explain that rating; the "Keep, Stop, Start" model with open-ended response questions; and, finally, open-ended questions that allow students to suggest changes and provide additional feedback on the course and/or instructor. The "Keep, Stop, Start" model is applied by asking students what they would like the instructors to keep doing, stop doing, and/or start doing. In the College of Pharmacy, we use the method that Hanover presented: we ask students to self-evaluate how well they feel they are doing in the class and then explain their rating in an open-ended, free-response field. This has only been in practice at the college for the past academic year, and anecdotally, from conversations with faculty, the data collected have generally been more actionable for the faculty. Like all evaluations, it is not a perfect system, and sometimes some of the data are not actionable, but in our college FEI is an integral part of indirect classroom assessment. The purpose is to collect and analyze themes associated with the different levels of evaluation rating (Best Practices in Designing Course Evaluations, 2022). The most important step, however, is to close the feedback loop in a timely manner (Fluckiger et al., 2010; Taylor et al., 2020; Veeck et al., 2016). Closing the feedback loop, for our purposes, means asking the course coordinator to respond to the feedback given in the FEI, usually within a week's time, detailing what changes, if any, will be made in the classroom and learning environment. Obviously, not all feedback is actionable, and in some cases best practices in the literature conflict with the suggestions made, but it is important for students to know what can be changed, what cannot or will not be changed, and why.
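As a small illustration of what working with FEI data can look like, here is a minimal sketch in R with entirely hypothetical responses: it summarizes the 7-point rating item and groups the open-ended "start" suggestions by rating band so themes can be reviewed against how students say the course is going.

# Hypothetical FEI responses: a 7-point rating plus Keep/Stop/Start comments
fei <- data.frame(
  rating = c(6, 3, 7, 2, 5),
  keep   = c("case discussions", "practice problems", "weekly quizzes", "lecture slides", "group work"),
  stop   = c("", "reading slides aloud", "", "surprise quizzes", ""),
  start  = c("posting answer keys", "more worked examples", "", "review sessions", "exam wrappers"),
  stringsAsFactors = FALSE
)

# Summarize the overall rating item
summary(fei$rating)

# Group the "start" suggestions by low (1-3) vs. high (4-7) ratings for theme review
rating_band <- ifelse(fei$rating <= 3, "low", "high")
split(fei$start, rating_band)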

 

What Remains? 

Some accrediting bodies (like the Accreditation Council for Pharmacy Education, or ACPE) require colleges to have an avenue for formative student feedback as part of their standards. I believe that formative evaluation benefits students and faculty alike, and while it may be too early to make a sweeping change and require FEI at every higher education institution, there may be value in educating faculty and assessment professionals about the benefits of FEI. Although outside the scope of this short blog post, adopting FEI as a common practice should be approached carefully, intentionally, and with best practices for organizational change management. Some final thoughts: to get students engaged in providing good feedback, the practice of FEI ideally has to be championed by the faculty. While it could be mandated by administration, that would likely not engender as much buy-in, and if the faculty, who are the primary touch points for the students, aren't sold on the practice or participate begrudgingly, the data collected will not be optimal and/or actionable. Students talk with each other across cohorts, and if students in upper classes have a negative opinion of the process, that will have a negative trickle-down effect. What is the best way to make students disengage? Don't close the feedback loop.

 

References and Resources 

Best Practices in Designing Course Evaluations. (2022). Hanover Research. 

Fluckiger, J., Tixier, Y., Pasco, R., & Danielson, K. (2010). Formative Feedback: Involving Students as Partners in Assessment to Enhance Learning. College Teaching, 58, 136–140. https://doi.org/10.1080/87567555.2010.484031 

Holt, M. E., & Moore, A. B. (1992). Checking Halfway: The Value of Midterm Course Evaluation. Evaluation Practice, 13(1), 47–50. 

Snooks, M. K., Neeley, S. E., & Revere, L. (2007). Midterm Student Feedback: Results of a Pilot Study. Journal on Excellence in College Teaching, 18(3), 55–73. 

Taylor, R. L., Knorr, K., Ogrodnik, M., & Sinclair, P. (2020). Seven principles for good practice in midterm student feedback. International Journal for Academic Development, 25(4), 350–362. 

Veeck, A., O’Reilly, K., MacMillan, A., & Yu, H. (2016). The Use of Collaborative Midterm Student Evaluations to Provide Actionable Results. Journal of Marketing Education, 38(3), 157–169. https://doi.org/10.1177/0273475315619652 

 

Filed Under: Evaluation Methodology Blog

Evaluation in the Age of Emerging Technologies

November 15, 2023 by Jonah Hall

By Richard Amoako

Greetings! My name is Richard Dickson Amoako. I am a second-year Ph.D. student in Evaluation, Statistics, and Methodology at the University of Tennessee, Knoxville. My research interests focus on areas such as program evaluation, impact evaluation, higher education assessment, and emerging technologies in evaluation.

As a lover of technology and technological innovation, I am intrigued by technological advancements in all spheres of our lives. The most recent is the rapid development and improvement of artificial intelligence (AI) and machine learning (ML). As an emerging evaluator, I am interested in learning about the implications of these technologies for evaluation practice.

In this blog post, I explore the implications of these technologies for evaluation: the relevant tools, how they can change the conduct of evaluation, the benefits and opportunities they offer evaluators, and the challenges and issues that come with their use.

 

Relevant Emerging Technologies for Evaluation 

Emerging technologies are new and innovative tools, techniques, and platforms that can transform the evaluation profession. These technologies can broadly be categorized into four groups: data collection and management tools, data visualization and reporting tools, data analysis and modeling tools, and digital and mobile tools. Three of the most popular emerging technologies relevant to evaluation are artificial intelligence, machine learning, and big data analytics.

  • Data collection and analysis: AI and ML can help evaluators analyze data faster and more accurately. These technologies can also identify patterns and trends that may not be apparent to the naked eye. Additionally, emerging technologies have also led to new data collection methods, such as crowdsourcing, social media monitoring, and web analytics. These methods provide valuable opportunities for evaluators to access a wider range of data sources and collect more comprehensive and diverse data. 
  • Increased access to data: Social media, mobile devices, and other technologies have made it easier to collect data from a wider range of sources. This can help evaluators gather more diverse perspectives and ideas. 
  • Improved collaboration: Evaluators can collaborate more effectively with the help of video conferencing, online collaboration platforms, and project management software, regardless of where they are located. 
  • Improved visualization: Evaluators can present their findings in a more engaging and understandable way by using emerging technologies like data visualization software and virtual reality. 

 

Challenges and Issues Associated with Emerging Technologies in Evaluation 

While emerging technologies offer many exciting opportunities for evaluators, they also come with challenges. One of the main challenges is keeping up to date with the latest technologies and trends. Evaluators should have a solid understanding of the technologies they use, as well as the limitations and potential biases associated with those technologies. In some cases, emerging technologies can be expensive or require specialized equipment, which can be a barrier for evaluators with limited resources. 

Another challenge is the need to ensure emerging technologies are used ethically and responsibly. As the use of emerging technologies in evaluation becomes more widespread, there is a risk that evaluators may inadvertently compromise the privacy and security of program participants. In addition, they may inadvertently misuse data. To address these challenges, our profession needs to develop clear guidelines and best practices for using these technologies in evaluation. 

To conclude, emerging technologies are revolutionizing the evaluation landscape, opening new opportunities for evaluators to collect, analyze, and use data. With artificial intelligence and machine learning, as well as real-time monitoring and feedback, emerging technologies are changing evaluation and increasing the potential for action-based research. However, as with any advancing technology, there are also challenges to resolve. Evaluators must keep up to date with the latest technologies and develop clear guidelines and best practices. They must also ensure that these technologies are used ethically and responsibly. 

 

Resources 

Adlakha D. (2017). Quantifying the modern city: Emerging technologies and big data for active living research. Frontiers in Public Health, 5, 105. https://doi.org/10.3389/fpubh.2017.00105 

Borgo, R., Micallef, L., Bach, B., McGee, F., & Lee, B. (2018). Information visualization evaluation using crowdsourcing. STAR – State of The Art Report, 37(7). Available at: https://www.microsoft.com/en-us/research/uploads/prod/2018/05/InfoVis-Crowdsourcing-CGF2018.pdf

Dimitriadou, E., & Lanitis, A. (2023). A critical evaluation, challenges, and future perspectives of using artificial intelligence and emerging technologies in smart classrooms. Smart Learning Environments, 10, 12. https://doi.org/10.1186/s40561-023-00231-3

Huda, M., Maseleno, A., Atmotiyoso, P., Siregar, M., Ahmad, R., Jasmi, K. A., & Muhamad, N. H. N. (2018). Big data emerging technology: Insights into innovative environment for online learning Resources. International Journal of Emerging Technologies in Learning (iJET), 13(01), pp. 23–36. https://doi.org/10.3991/ijet.v13i01.6990 

Jurafsky, D., & Martin, J. H. (2009). Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Prentice Hall. 

World Health Organization. (2016). Monitoring and evaluating digital health interventions: A practical guide to conducting research and assessment. WHO Press. Available at:  https://saluddigital.com/wp-content/uploads/sites/9/2019/06/WHO.-Monitoring-and-Evaluating-Digital-Health-Interventions.pdf 

Filed Under: Evaluation Methodology Blog

So, You Want to Be a Higher Education Assessment Professional? What Skills and Dispositions are Essential?

November 1, 2023 by Jonah Hall

By Jennifer Ann Morrow, PhD.

What does it take to be a competent higher education assessment professional? What skills and dispositions are needed to be successful in this field? I get asked this question a lot by my students, and while many times my go-to answer is "it depends," that answer would not suffice in preparing them for this career path. So, in order to give them a more comprehensive answer, I went to the literature.

Although I have been teaching emerging assessment and evaluation professionals for the past 22 years, and have at various times coordinated both a Ph.D. program and a certificate program in Evaluation, Statistics, and Methodology, I didn't want to rely on just what our curriculum focuses on to answer their question. We educate students with diverse career paths (e.g., assessment professional, evaluator, faculty, data analyst, psychometrician), so our curriculum touches upon skills and dispositions across a variety of careers. Therefore, I delved deeper into the literature to give my students a more focused answer for their chosen career path.

Guess what I found… "it depends!" There was little to no consistency or agreement within our field as to what the essential competencies are for a higher education assessment professional. So, depending on whom you asked and what source you read, the answer was different. While some sources touched upon needed knowledge and skills, very few discussed the dispositions essential to our professional practice. My curious mind was racing, and after some long discussions and literature review with one of my fabulous graduate students, Nikki Christen, we started compiling lists of needed skills and dispositions from the literature. We soon realized that we needed to hear from higher education assessment professionals themselves to figure out what skills and dispositions were needed. So, a new research project was born! We brought on two other fabulous assessment colleagues, Dr. Gina Polychronopoulos and Dr. Emilie Clucas Leaderman, and developed a national survey project to assess higher education assessment professionals' perceptions of the skills and dispositions needed to be effective in their jobs. I wanted to be able to give my students a better answer than "it depends!"

You can check out our article (https://www.rpajournal.com/dev/wp-content/uploads/sites/9/2022/03/A-Snapshot-of-Needed-Skills-RPA.pdf) for detailed information on our methodology and results for this project. We had 213 higher education assessment professionals from across the country rate the importance of 92 skills and 52 dispositions for our field. I’ll briefly summarize the results here and then offer my suggestions to those who are interested in this career path. 

 

Summary of Needed Skills 

We found that the most important skills were interpersonal ones! Collaborating with others on assessment, developing collaborative relationships with stakeholders, and working with faculty on assessment projects were the highest rated skills. One participant even stated, "assessment is about people!" Building relationships, collaboration, facilitation, and communication were all salient themes here. Other highly rated skills related to disseminating information: communicating assessment results to stakeholders, communicating assessment results in writing, and disseminating assessment results were all rated highly by higher education assessment professionals. Leadership skills were also deemed highly important by participants. Advocating for the value of assessment, developing a culture of assessment within an organization, and facilitating change in an organization using assessment data were all seen as key skills. Project management was also rated as highly important for competence in this field. Managing time, managing projects, and managing people were highly valued skills. Various aspects of assessment design, developing assessment tools, data management, and engaging in ethical assessment were also highly rated. One unexpected finding was that teaching experience was mentioned by a number of assessment professionals as a needed skill in the open-ended responses (ha, the educator forgot to ask about teaching!).

 

Summary of Needed Dispositions 

Many dispositions were rated as highly important by our participants. One mentioned, "personally I feel dispositions are more vital than technical skills. You can learn the techniques but without the personality, you will have trouble motivating others!" Interpersonal dispositions such as collaborative, honest, helpful, inclusive, and supportive were deemed highly important to have. Responsiveness was also highly rated. Dispositions like problem solver and adaptable were found to be highly important, as was having a consistent work approach. Dispositions such as trustworthy, reliable, ethical, analytical, detail oriented, and strategic were highly rated in this category. Expression-related dispositions were also seen as important: being transparent, articulate, and professional were all highly rated. Other themes that emerged from the open-ended responses were flexibility, patience, 'thick skin,' and 'it depends' (seriously, I didn't even prompt them for that response!).

 

Next Steps: Starting Your Journey as a Higher Education Assessment Professional 

So now what? Now that you have some idea of what skills and dispositions are needed to be successful as a higher education assessment professional, what are your next steps? My advice is threefold: read, engage, and collaborate. Read the latest articles in the leading assessment journals (see the list below). Here you will find the latest trends, the leading scholars, and suggestions for all the unanswered questions that still need to be explored in our field. Engage in learning and networking opportunities in our field. Attend the many conferences, webinars, and trainings (some are free!), and join a professional organization and get involved. The Association for the Assessment of Learning in Higher Education (AALHE) is one of my homes. They have always been welcoming, and I've made great connections by attending events and volunteering. Reach out to others in our field for advice, to discuss research interests, and to explore possible collaborations. Post a message on the ASSESS listserv asking for advice or to connect with others who have similar research interests. There are many ways to learn more about our field and to get involved…just put yourself out there. Good luck on your journey!

 

References and Resources 

Christen, N., Morrow, J. A., Polychronopoulos, G. B., & Leaderman, E. C. (2023). What should be in an assessment professionals’ toolkit? Perceptions of need from the field. Intersection: A Journal at the Intersection of Assessment and Learning. https://aalhe.scholasticahq.com/article/57789-what-should-be-in-an-assessment-professionals-toolkit-perceptions-of-need-from-the-field/attachment/123962.pdf 

Gregory, D., & Eckert, E. (2014, June). Assessment essentials: Engaging a new audience (things student affairs personnel should know or learn). Paper presented at the annual Student Affairs Assessment and Research Conference, Columbus, OH. 

Hoffman, J. (2015). Perceptions of assessment competency among new student affairs professionals. Research & Practice in Assessment, 10, 46-62. https://www.rpajournal.com/dev/wp-content/uploads/sites/9/2015/12/A4.pdf 

Horst, S. J., & Prendergast, C. O. (2020). The Assessment Skills Framework: A taxonomy of assessment knowledge, skills and attitudes. Research & Practice in Assessment, 15(1). https://www.rpajournal.com/dev/wp-content/uploads/sites/9/2020/05/The-Assessment-Skills-Framework-RPA.pdf 

Janke, K. K., Kelley, K. A., Sweet, B. V., & Kuba, S. E. (2017). Cultivating an assessment head coach: Competencies for the assessment professional. Assessment Update, 29(6). doi:10.1002/au.30113 

Polychronopoulos, G. B., & Clucas Leaderman, E. (2019). Strengths-based assessment practice: Constructing our professional identities through reflection. NILOA Viewpoints. Retrieved from https://www.learningoutcomesassessment.org/wp-content/uploads/sites/9/2019/08/Viewpoints-Polychronopoulos-Leaderman.pdf 

AALHE: https://www.aalhe.org/  

AEFIS Academy: https://www.aefisacademy.org/global-category/assessment/?global_filter=all  

Assess Listserv: https://www.aalhe.org/assess-listserv 

Assessment and Evaluation in Higher Education: https://www.tandfonline.com/journals/caeh20 

Assessment in Education: Principles, Policy & Practice: https://www.tandfonline.com/loi/caie20 

Assessment Institute: https://assessmentinstitute.iupui.edu/ 

Assessment Update: https://onlinelibrary.wiley.com/journal/15360725 

Educational Assessment, Evaluation, and Accountability: https://www.springer.com/journal/11092 

Emerging Dialogues: https://www.aalhe.org/emerging-dialogues 

Intersection: A Journal at the Intersection of Assessment and Learning: https://www.aalhe.org/intersection 

JMU Higher Education Assessment Specialist Graduate Certificate: https://www.jmu.edu/pce/programs/all/assessment/index.shtml 

Journal of Assessment and Institutional Effectiveness: https://www.psupress.org/journals/jnls_jaie.html 

Journal of Assessment in Higher Education: https://journals.flvc.org/assessment 

Online Free Assessment Course: http://studentaffairsassessment.org/online-open-course 

Practical Assessment, Research, and Evaluation: https://scholarworks.umass.edu/pare/ 

Research & Practice in Assessment: https://www.rpajournal.com/ 

Rider University Higher Education Assessment Certificate: https://www.rider.edu/academics/colleges-schools/college-education-human-services/certificates-endorsements/higher-education-assessment 

Ten Trends in Higher Education Assessment: https://weaveeducation.com/assessment-meta-trends-higher-ed/ 

Weave Assessment Resources: https://weaveeducation.com/assessment-accreditation-webinars-ebooks-guides/?topic=assessment 

 

Filed Under: Evaluation Methodology Blog

So I Like Statistics, Now What?

October 15, 2023 by Jonah Hall

By Jake Working 

Whether you’ve taken a statistics class, recently read a report from your data analyst, or simply want to make data-driven decisions, something about statistics just clicked with you. But what comes next? What can you do with this newfound passion?

I’m Jake Working, a current PhD student in the Evaluation, Statistics, and Methodology program at the University of Tennessee, and I had similar questions after my first statistics class in college. In this post, I’ll discuss methods for improving your statistical skill set, help you refine your rationale for applying statistics, and introduce the methodology, evaluation, statistics, and assessment (MESA) fields.

Overview

1. Explore Statistics: Methods to improve your statistical skill set

2. Discover Your Motivation: Refine your rationale for statistical application

3. What is MESA? An introduction to the fields

 

Explore Statistics

Now that you have found an interest, keep learning! If you are still in college, consider a statistics minor or simply take a few courses outside your major. As an engineering student, I was able to take additional statistics-related courses, such as business statistics and statistical methods in Six Sigma. Most institutions offer applied, topic-specific courses like these, but it is also worth taking foundational statistics courses taught from a mathematical perspective so you build a basic understanding of statistical theory and methodology.

Image Credit: XKCD

Building foundational knowledge of statistics does not have to be expensive, though. If you aren’t currently a college student, there are endless opportunities to gain statistical knowledge for free! R, a popular statistical analysis program, is free and open source. I recommend pairing it with an interface such as RStudio or BlueSky (both also free and open source) and taking a certification course to get started (such as this one offered by Johns Hopkins). If you work in manufacturing, statistical training related to Six Sigma or quality control may be more beneficial, and there are many options for becoming Six Sigma certified.
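If you want a sense of what working in R actually looks like, here is a minimal sketch of a first session. It uses only base R and the built-in mtcars example dataset, so it runs as-is in RStudio or BlueSky without installing anything extra; the variables shown are simply the ones that ship with that dataset.

    # A first R session: descriptive statistics and a simple group comparison,
    # using only base R and the built-in mtcars example dataset.
    data(mtcars)
    summary(mtcars$mpg)     # min, quartiles, median, and mean for fuel economy
    sd(mtcars$mpg)          # standard deviation of fuel economy
    table(mtcars$cyl)       # frequency table of cylinder counts

    # Compare mean mpg between automatic (am == 0) and manual (am == 1) cars
    t.test(mpg ~ am, data = mtcars)

Running a handful of commands like these is a low-stakes way to check whether the hands-on side of statistics holds your interest before committing to a longer course or certification.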

 

Discover Your Motivation

Why did you initially enjoy statistics? I was drawn to several aspects of statistical analysis, such as data visualization and data-driven decision making, which ultimately led me to the MESA fields.

At first, I was motivated by statistical reporting and data visualization techniques that distill complex but useful information into something digestible and easy to understand. While it may come naturally to some, data visualization is a learned and ever-evolving skill. If you are interested in this area, I recommend checking out Stephanie Evergreen’s Evergreen Data for data visualization checklists, best practices, and online courses!
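To make that concrete, here is a minimal sketch of a simple, decluttered chart in R. It assumes the ggplot2 package is installed (install.packages("ggplot2")), and the pre-/post-test numbers are invented purely for illustration, not taken from any real program.

    # A small, decluttered bar chart in R using ggplot2 (assumed installed).
    # The scores below are hypothetical, for illustration only.
    library(ggplot2)

    results <- data.frame(
      group = factor(c("Pre-test", "Post-test"),
                     levels = c("Pre-test", "Post-test")),  # keep pre/post order
      score = c(62, 78)
    )

    ggplot(results, aes(x = group, y = score)) +
      geom_col(fill = "grey40") +                 # one muted color, no legend needed
      labs(title = "Average score before and after the program",
           x = NULL, y = "Average score") +
      theme_minimal()                             # drop the default grey background

Small choices like a single muted color, a descriptive title, and ordering the bars the way a reader expects are exactly the habits that checklists like Evergreen’s help you build.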

Most importantly, I enjoyed being able to support any decision I made with data. This motivation allowed me to weave statistical methods for data-driven decision making into every role I held. Data-driven decision making is valued in every field because it gives you substantial rationale and evidence for making progress. If you are like me, you enjoy the field you already work in and want to apply these motivations there more formally. Enter the MESA fields.

 

What is MESA?

The interwoven fields of methodology, evaluation, statistics, and assessment (MESA) offer a growing number of career opportunities for those who start with a passion for statistics. You likely have a sense of where statistics fits, but how do the other fields connect?

Methodology, in this context, refers to the systems (or methods) of gathering information related to a particular problem (Charles, 2019). It is the “how” of gathering information to address your question or problem. Examples of methodologies include qualitative, quantitative, and mixed methods. The Grad Coach has a great resource on defining research methodology. You can think of statistics and methodology as the tools used to conduct assessments and evaluations, the other areas of MESA.

Evaluation refers to the process of determining the merit, worth, or value of a process or the product of that process (Scriven, 1991, p. 139). One common area within this field is program evaluation, which focuses on evaluating program objectives and informs decisions about the program.

Assessment is often defined as “any effort to gather, analyze, and interpret evidence which describes institutional, divisional, or agency effectiveness” (Upcraft & Schuh, 1996, p. 18). The main goal of assessment is to gather information in order to improve performance. Examples of assessment include standardized tests, surveys, homework or exams, and self-reflection (Formative, 2021).

If you’d like to get a sense of the careers within these fields, search for jobs related to evaluation, assessment, methodology, data analysis, psychometrics, or research analysis.

 

References

Charles, H. (2019). Research Methodology Definition [PowerPoint slides]. SlidePlayer. https://slideplayer.com/slide/13250398/

Formative and Summative Assessments. Yale Poorvu Center for Teaching and Learning. (2021, June 30). Retrieved March 26, 2023, from https://poorvucenter.yale.edu/Formative-Summative-Assessments

Scriven, M. (1991). Evaluation Thesaurus. Sage. https://files.eric.ed.gov/fulltext/ED214952.pdf

Upcraft, M. L., & Schuh, J. H. (1996). Assessment in Student Affairs: A Guide for Practitioners. Jossey-Bass.

Filed Under: Evaluation Methodology Blog

Introducing the Blog!

October 5, 2023 by Jonah Hall

From the Faculty of the Evaluation, Statistics, & Methodology PhD Program!

Hello and welcome to our new blog. We are MAD about Methods! As faculty who have been involved in the field of Measurement, Evaluation, Statistics, and Assessment (MESA) for many years, we are excited to introduce you to our new blog: MAD (Meaningful, Action-Driven) with Methods and Measures. Our blog is sponsored by the Evaluation, Statistics, and Methodology program at The University of Tennessee, Knoxville, and our aim is to engage in discussions and enrich scholarly contributions about MESA.

MESA is an interdisciplinary field that involves the application of rigorous quantitative and qualitative methodologies to assess problems in the educational, social, and behavioral sciences. At the core of MESA is the idea of gathering and analyzing data to help policy makers, practitioners, and researchers make informed decisions. The field encompasses a wide range of topics, including educational assessment, program evaluation, psychometrics, survey research, qualitative methods, and data science. Through our blog, we hope to provide a platform for scholars and practitioners to share their insights and experiences, and to discuss the latest developments in the field.

Our vision for this blog is to become the go-to place for anyone interested in MESA topics or looking to stay informed about the latest happenings and hot topics in our field. Whether you are a student just starting your journey or an experienced practitioner looking to stay up-to-date with the latest research, we hope that you will find our blog to be a valuable resource.

In addition to providing brief research topics and news, we also hope to use this blog as an opportunity to explore the skills, knowledge, and dispositions required to be successful in the MESA field. We will be highlighting the work of scholars and practitioners who are making a difference in the field and discussing the competencies that have enabled them to achieve success. Each post will also contain a list of resources for readers interested in exploring the topic in greater detail.

The Evaluation, Statistics, and Methodology (ESM) program at the University of Tennessee is committed to providing students with the skills and knowledge they need to succeed in the MESA field. We offer two graduate programs: a residential PhD in Evaluation, Statistics, and Methodology and a fully online MS in Education with a concentration in Evaluation Methodology. Through our blog, we hope to provide emerging and experienced professionals with the insights and guidance they need to excel in their chosen discipline.

We hope you find our MAD blog a valuable platform for coming together to discuss the latest developments in evaluation, assessment, and research methodology. Join us on this journey and go MAD with methods!

On behalf of the ESM Faculty:

Louis Rocconi, Jennifer Morrow, Leia Cain, Fatima Zahra

Filed Under: Evaluation Methodology Blog
