Educational Leadership and Policy Studies


Introducing the Evaluation Methodology MS Program at UTK!

March 1, 2024 by Jonah Hall

By Dr. Jennifer Ann Morrow 

Hi everyone! My name is Dr. Jennifer Ann Morrow, and I’m the program coordinator for the University of Tennessee at Knoxville’s new distance education master’s program in Evaluation Methodology. I’m happy to announce that we are currently taking applications for our first cohort that will start in Fall 2024. In a world driven by data, the EM Master’s program gives you the skills to make evidence-based decisions!  

So Why Should You Join Our Program? 

Fully Online Program 

Our new program is designed for the working professional: all courses are fully online and asynchronous, which enables students to complete assignments at times convenient for them. Although our courses are asynchronous, our faculty offer optional weekly synchronous student hours/help sessions for additional assistance and mentorship. Students also participate in both group and individual advising sessions each semester, where they receive mentorship, practical experience suggestions, and career exploration guidance.

Applied Coursework 

Our 30-credit program is designed to be completed in just under 2 years (5 semesters, only 2 courses per semester!). Each class includes hands-on, applied experiences covering the entire program evaluation process, including evaluation design, data collection, data analysis, and data dissemination. In their first year, students take a two-semester program evaluation course sequence, statistics 1, introduction to qualitative research 1, evaluation designs and data collection methods, and an elective. In their second year, students take survey research, disseminating evaluation results, and a two-semester evaluation practicum course sequence in which they finalize a portfolio of their evaluation experiences to fulfill their comprehensive exam requirements. Students who are unable to take 6 credits a semester have up to 6 years to complete the degree at a slower pace.

Experienced Faculty 

Our faculty are experienced educators! All faculty work as evaluators or in a related role, such as assessment professional, applied researcher, or psychometrician. They are dedicated faculty who understand what skills and competencies are needed in the evaluation field and ensure that their classes focus on them. All are actively involved in their professional organizations (e.g., American Evaluation Association, American Psychological Association, Association for the Assessment of Learning in Higher Education, Association for Institutional Research) and publish their scholarly work in peer-reviewed journals.

How to Apply 

It’s easy to apply! Go to the UTK Graduate Admissions Portal (https://apply.gradschool.utk.edu/apply/) and fill out your application. You will need 2-3 letters of recommendation (enter your recommenders’ contact information and UTK will reach out to them), college transcripts, a goals statement (a letter introducing yourself and explaining why you want to join our program), and the application fee. No GRE scores are needed! Applications are due by July 1 of each year (though we will review them early if you submit before then!). Tuition is $700 per graduate credit ($775 for out-of-state students).

 

Contact Me for More Information 

If you have any questions about our program just reach out! 

 

Jennifer Ann Morrow Ph.D.
jamorrow@utk.edu
(865)-974-6117
https://cehhs.utk.edu/elps/people/jennifer-ann-morrow-phd/

Helpful Resources 

Evaluation Methodology Program Website: https://cehhs.utk.edu/elps/evaluation-methodology-ms/  

Evaluation Methodology Program VOLS Online Website: https://volsonline.utk.edu/programs-degrees/education-evaluation-methodology-ms/  

Evaluation Methodology Program Student Handbook: https://cehhs.utk.edu/elps/wp-content/uploads/sites/9/2023/11/EM-MASTERS-HANDBOOK-2023.pdf  

UTK Educational Leadership and Policy Studies Website: https://cehhs.utk.edu/elps/  

UTK Educational Leadership and Policy Studies Facebook Page: https://www.facebook.com/utkelps/?ref=embed_page  

UTK Graduate School Admissions Website: https://gradschool.utk.edu/future-students/office-of-graduate-admissions/applying-to-graduate-school/  

UTK Graduate School Admission Requirements: https://gradschool.utk.edu/future-students/office-of-graduate-admissions/applying-to-graduate-school/admission-requirements/  

UTK Graduate School Application Portal: https://apply.gradschool.utk.edu/apply/  

UTK Distance Education Graduate Fees: https://onestop.utk.edu/wp-content/uploads/sites/9/sites/63/2023/11/Spring-24-GRAD_Online.pdf  

UTK Graduate Student Orientations: https://gradschool.utk.edu/future-students/graduate-student-orientations/  

American Evaluation Association: https://www.eval.org/ 

AEA Graduate Student and New Evaluator TIG: https://www.facebook.com/groups/gsnetig/ 

Filed Under: Evaluation Methodology Blog

Evaluation Capacity Building: What is it, and is a Job Doing it a Good Fit for Me?

February 15, 2024 by Jonah Hall

By Dr. Brenna Butler

Hi, I’m Dr. Brenna Butler, and I’m currently an Evaluation Specialist at Penn State Extension (https://extension.psu.edu/brenna-butler). I graduated from the ESM Ph.D. program in May 2021, and in my current role, a large portion of my job involves evaluation capacity building (ECB) within Penn State Extension. What does ECB specifically look like day-to-day, and is ECB a component of a job that would be a good fit for you? This blog post will cover some of my thoughts and opinions of what ECB may look like in a job in general. Keep in mind that these opinions are exclusively mine, and don’t represent those of my employer.

Evaluation capacity building (ECB) is the process of increasing the knowledge, skills, and abilities of individuals in an organization to conduct quality evaluations. This is often done by evaluators (like me!) providing the tools and information for individuals to conduct sustained evaluative practices within their organization (Sarti et al., 2017). The amount of literature covering ECB is on the rise (Bourgeois et al., 2023), indicating that evaluators taking on ECB roles within organizations may also be increasing. Although there are formal models and frameworks in the literature that describe ECB work within organizations (the article by Bourgeois and colleagues (2023) provides an excellent overview of these), I will cover three specific qualities of what it takes to be involved in ECB in an organization.

ECB Involves Teaching

Much of my role at Penn State Extension is providing mentorship to Extension Educators on how to incorporate evaluation in their educational programming. This mentorship role sometimes looks like a more formal teaching role by conducting webinars and training on topics such as writing good survey questions or developing a logic model. Other times, this mentorship role will take a more informal teaching route when I am answering questions Extension Educators email me regarding data analysis or ways to enhance their data visualizations for a presentation. Enjoying teaching and assisting others in all aspects of evaluations are key qualities of an effective evaluator who leads ECB in an organization.

ECB Involves Leading

Taking on an ECB role involves providing a great deal of guidance and being the go-to expert on evaluation within the organization. Individuals will often look to the evaluator in these positions for direction on evaluation and assessment projects. This requires speaking up in meetings to advocate for strong evaluative practices (“Let’s maybe not send out a 30-question survey where every single question is open-ended”). An evaluator involved in ECB work needs to be comfortable speaking up and pushing back against the norms of “how the organization has always done something.”

One way this “we’ve always done it this way” mentality can be tackled by evaluators is through an evaluation community of practice. Each meeting is held around a different evaluation topic area where members of the organization are invited to talk about what has worked well for them and what hasn’t in that area and showcase some of the work they have conducted through collaboration with the evaluator. The intention is that these community of practice meetings that are open to the entire organization can be one way of moving forward with adopting evaluation best practices and leaning less on old habits.

ECB Involves Being Okay with “Messiness”

An organization may invest in hiring an evaluation specialist who can guide the group to better evaluative practices because they lack an expert in evaluation. If this is the case, evaluation plans may not exist, and your role as an evaluator in the organization will be to start from scratch in developing evaluative processes. Alternatively, it could be that evaluations have been occurring in the organization but may not be following best practices, and you will be tasked with leading the efforts to improve these practices.

Work in this scenario can become “messy” in the sense that tracking down historical evaluation data collected before an evaluator was guiding these efforts in the organization can become very difficult. For example, there may not be a centralized location or method to how paper survey data were being stored. One version of the data may involve tally marks on a sheet of paper indicating the number of responses to each question, and another version of the same survey data may be stored in an Excel file with unlabeled rows. These scenarios require adequate discernment by the evaluator if the historical data are worth combing through and combining so that they can be analyzed, or if starting from scratch and collecting new data will ultimately save time and effort. Being part of ECB in an organization involves being up for the challenge of working through these “messy”, complex scenarios.

Hopefully, this provided a brief overview of some of the work done by evaluators in ECB within organizations and can help you discern if a position involving ECB may be in your future (or not!).

 

Links to Explore for More Information on ECB

https://www.betterevaluation.org/frameworks-guides/rainbow-framework/manage/strengthen-evaluation-capacity

https://www.oecd.org/dac/evaluation/evaluatingcapacitydevelopment.htm

http://www.pointk.org/client_docs/tear_sheet_ecb-innovation_network.pdf

https://wmich.edu/sites/default/files/attachments/u350/2014/organiziationevalcapacity.pdf

https://scholarsjunction.msstate.edu/cgi/viewcontent.cgi?article=1272&context=jhse

 

References

Bourgeois, I., Lemire, S. T., Fierro, L. A., Castleman, A. M., & Cho, M. (2023). Laying a solid foundation for the next generation of evaluation capacity building: Findings from an integrative review. American Journal of Evaluation, 44(1), 29-49. https://doi.org/10.1177/10982140221106991

Sarti, A. J., Sutherland, S., Landriault, A., DesRosier, K., Brien, S., & Cardinal, P. (2017). Understanding of evaluation capacity building in practice: A case study of a national medical education organization. Advances in Medical Education and Practice, 761-767. https://doi.org/10.2147/AMEP.S141886

Filed Under: Evaluation Methodology Blog

Supporting Literacy Teachers with Actionable Content-Based Feedback

February 6, 2024 by Jonah Hall

By Dr. Mary Lynne Derrington & Dr. Alyson Lavigne 

Please Note: This is Part 3 of a four-part series on actionable feedback. Stay tuned for the next posts that will focus on Leadership Content Knowledge (LCK) and teacher feedback in the areas of STEM, Literacy, and Early Childhood Education.

Missed the beginning of the series? Click here to read Part 1
on making teacher feedback count!

A strong literacy foundation in students’ early years is critical for success in their later ones. School leadership plays a significant part in establishing this foundation by equipping teachers with the right professional development.

Many (but not all) school leaders are versed in effective literacy instruction. Given its foundational importance, it is wise for principals — and others who observe and mentor teachers — to leverage the key elements of effective literacy instruction in the observation cycle. In this blog post, we outline two ways to do so.

Jan Dole, Parker Fawson, and Ray Reutzel suggest that one way to use research-based supervision and feedback practices in literacy instruction is to include in the observation cycle tools, guides, and checklists that specifically focus on literacy instruction, such as:

  • The Protocol for Language Arts Teaching Observations (PLATO; Grossman, 2013)
  • The Institute of Education Sciences’ (IES) K-3 School Leader’s Literacy Walkthrough Guide (Kosanovich et al., 2015)
  • The Institute of Education Sciences’ (IES): Grades 4-12 School Leaders Literacy Walkthrough Guide (Lee et al., 2020)

These tools use a rubric or checklist to highlight key concepts, or “look-fors,” of literacy-rich environments. Some examples follow:

  • Strategy Use and Instruction: The teacher’s ability to teach strategies and skills that support students in reading, writing, speaking, listening, and engaging with literature (PLATO)
  • Literacy Texts: Retell familiar stories, including key details (IES K-3; Kosanovich et al., 2015)
  • Vocabulary and Advanced Word Study: Explicit instruction is provided in using context clues to help students become independent vocabulary learners using literary and content area text (IES 4-12; Lee et al., 2020)

A second way is to develop professional learning communities (PLCs) to extend literacy supervision and feedback. Successful literacy-focused PLCs:

  • Establish a shared literacy mission, vision, values, and goals,
  • engage in regular collective inquiry on evidence-based literacy practices, and
  • promote continuous literacy instruction improvement among staff.

These strategies can be used by school leaders or complement the work of a school literacy coach. Ready to create a learning community in your school or district? Read KickUp’s tips for setting PLCs up for success.

This blog entry is part of a four-part series on actionable feedback. Stay tuned for our next post that will focus on concrete ways to provide feedback to Early Childhood Education teachers.

If this blog has sparked your interest and you want to learn more, check out our book, Actionable Feedback to PK-12 Teachers. And for other suggestions on supervising teachers in literacy, see Chapter 9 by Janice A. Dole, Parker C. Fawson, and D. Ray Reutzel.

Filed Under: News

Timing is Everything… Or Is It? How Do We Incentivize Survey Participation?

February 1, 2024 by Jonah Hall

By M. Andrew Young

Hello! My name is M. Andrew Young. I am a second-year Ph.D. student in the Evaluation, Statistics, and Methodology Ph.D. program here at UT-Knoxville. I currently work in higher education assessment as Director of Assessment at East Tennessee State University’s College of Pharmacy.

Let me tell you a story; and you are the main character!

4:18pm Friday Afternoon:

Aaaaaand *send*.

You put the finishing touches on your email. You’ve had a busy, but productive day. Your phone buzzes. You reach down to the desk and turn on the screen to see a message from your friends you haven’t seen in a while.

Tonight still good?

“Oh no! I forgot!” You tell yourself as you flop back in your office chair. “I was supposed to bring some drinks and a snack to their house tonight.”

As it stands – you have nothing.

You look down at your phone while you recline in your office chair, searching for “grocery stores near me.” You find the nearest result and bookmark it for later. You have a lot left to do, and right now, you can’t be bothered.

Yes! I am super excited! When is everyone arriving? You type hurriedly in your messaging app and hit send.

You can’t really focus on anything else. One minute passes by and your phone lights up again with the notification of a received text message.

Everyone is getting here around 6. See you soon!

Thanks! Looking forward to it!

You lay your phone down and dive back into your work.

4:53pm:

Work is finally wrapped up. You pack your laptop into your backpack, grab a stack of papers, and joggle them on your desk to get them at least a little orderly before you jam them in the backpack. You shut your door and rush to your vehicle. You start your car and navigate to the grocery store you bookmarked earlier.

“17 minutes to your destination,” your GPS says.

5:12pm:

It took two extra minutes to arrive because, as usual, you caught the stoplights on the wrong rotation. You finally find a parking spot, shuffle out of your car and head toward the entrance.

You freeze for a moment. You see them.

You’ve seen them many times, and you always try to avoid them. You know there is going to be the awkward interaction of a greeting, a request of some sort; usually for money. Your best course of action is to ignore them. Everyone knows that you hear them, but it is a small price to pay in your hurry.

Sure enough, “Hello! Can you take three minutes of your time to answer a survey? We’ll give you a free t-shirt for your time!”

You shoot them a half smile and a glance as you pick up your pace and rush past the pop-up canopy and table stacked with items you barely pay attention to as you pass.

Shopping takes longer than you’d hoped. The lines are long at this time of day. You don’t have much, just an armful of goods, but no matter, you must wait your turn. Soon, you make your way out of the store to be unceremoniously accosted again.

5:32pm:

You have to drive across town. Now, you won’t even have enough time to go home and change before your dinner engagement. You rush towards the door. The sliding doors part as you pass through the entrance, right by them.

“Please! If you will take three minutes, we will give you a T-shirt. We really want your opinion on an important matter in your community!”

You gesture with your hand and explain, “I’m sorry, but I’m in a terrible rush!”

——————————————————————————————–

So, what went wrong for the survey researchers? Why didn’t you answer the survey? They were at the same place at the same time as you. They offered you an incentive to participate. They specified that it would take only three minutes of your time to complete. So, why did you brush them off, as you have so many other charities and survey givers stationed in front of your store of choice in the past?

Oftentimes, we are asked for our input, or our charity, but before we even receive the first invitation, we have already determined that we will not participate. Why? In this scenario, you were in a hurry. The incentive they were offering was not motivating to you.

Would it have changed your willingness to participate if they offered a $10 gift card to the store you were visiting? Maybe, maybe not.

The question is, more and more, how do we incentivize participation in a survey? Paper, online, person-to-person: all are suffering from the conundrum of falling response rates (Lindgren et al., 2020). This impacts the validity of your research study. How can you ensure that you are getting heterogeneous sampling from populations? How can you be sure that you are getting the data you need from the people you want to sample? This can be a challenge.

In recently published work on survey incentives, many studies acknowledge that time and place affect participation, but we don’t quite understand how. Some studies, such as Lindgren et al. (2020), have tried to determine the best time of day and day of the week to invite survey participants, but the authors themselves discuss a limitation of their study that is endemic to many others: the lack of heterogeneity among participants and the interplay of response and nonresponse bias:

While previous studies provide important empirical insights into the largely understudied role of timing effects in web surveys, there are several reasons why more research on this topic is needed. First, the results from previous studies are inconclusive regarding whether the timing of the invitation e-mails matter in web survey modes (Lewis & Hess, 2017, p. 354). Secondly, existing studies on timing effects in web surveys have mainly been conducted in an American context, with individuals from specific job sectors (where at least some can be suspected to work irregular hours and have continuous access to the Internet). This makes research in other contexts than the American, and with more diverse samples of individuals, warranted (Lewis & Hess, 2017, p. 361; Sauermann & Roach, 2013, p. 284). Thirdly, only the Lewis and Hess (2017), Sauermann and Roach (2013), and Zheng (2011) studies are recent enough to provide dependable information to today’s web survey practitioners, due to the significant, and rapid changes in online behavior the past decades. (p. 228)

Timing, place/environment, and matching the incentive to the situation and participant (and maybe even the topic, if possible) are all influential in improving response rates. Best practices indicate that pilot testing survey items can help create a better survey, but how about finding what motivates your target population to even agree to begin the survey in the first place? That is less explored, and I think it is an opportunity for further study.

This gets even harder when you are trying to reach hard-to-reach populations. Many times, it takes a variety of approaches, but what is less understood is how to decide on your initial approach. The challenge that other studies have run into, and something that I think will continue to present itself as a hurdle, is this: because of the lack of research on timing and location, and because of the lack of heterogeneity in the studies that do exist, the generalizability of existing studies is limited, if not altogether impractical. So, that leads me full-circle back to pilot-testing incentives and timing for surveys. Get to know your audience!

Cool Citations to Read:

Guillory, J., Wiant, K. F., Farrelly, M., Fiacco, L., Alam, I., Hoffman, L., Crankshaw, E., Delahanty, J., & Alexander, T. N. (2018). Recruiting Hard-to-Reach Populations for Survey Research: Using Facebook and Instagram Advertisements and In-Person Intercept in LGBT Bars and Nightclubs to Recruit LGBT Young Adults. J Med Internet Res, 20(6), e197. https://doi.org/10.2196/jmir.9461

Lindgren, E., Markstedt, E., Martinsson, J., & Andreasson, M. (2020). Invitation Timing and Participation Rates in Online Panels: Findings From Two Survey Experiments. Social Science Computer Review, 38(2), 225–244. https://doi.org/10.1177/0894439318810387

Robinson, S. B., & Leonard, K. F. (2018). Designing Quality Survey Questions. SAGE Publications, Inc. [This is our required book in Survey Research!]

Smith, E., Loftin, R., Murphy-Hill, E., Bird, C., & Zimmermann, T. (2013). Improving developer participation rates in surveys. 2013 6th International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE), 89–92. https://doi.org/10.1109/CHASE.2013.6614738

Smith, N. A., Sabat, I. E., Martinez, L. R., Weaver, K., & Xu, S. (2015). A Convenient Solution: Using MTurk To Sample From Hard-To-Reach Populations. Industrial and Organizational Psychology, 8(2), 220–228. https://doi.org/10.1017/iop.2015.29

Neat Websites to Peek At:

https://blog.hubspot.com/service/best-time-send-survey (limitations, again, no demographics understanding, they did say to not send it in high-volume work times, but not everyone works the same type of M-F 8:00am-4:30pm job)

https://globalhealthsciences.ucsf.edu/sites/globalhealthsciences.ucsf.edu/files/tls-res-guide-2nd-edition.pdf (this is targeted directly towards certain segments of hard-to-reach populations. Again, generalizability challenges, but the idea is there)

Filed Under: Evaluation Methodology Blog

Dueñas Highlighted as a 2024 Emerging Scholar by Diverse Issues in Higher Education

January 31, 2024 by Jonah Hall

Courtesy of the College of Education, Health, and Human Sciences

Mary Dueñas is passionate about student success, especially among underrepresented and marginalized student populations. Because of her passion for helping students thrive in a higher education environment, she dedicates a large portion of her scholarship to examining equity and access issues in higher education.

Mary Dueñas

Her work hasn’t gone unnoticed. Just recently, Diverse Issues in Higher Education named Dueñas an “Equity and Access Champion” in its January 18, 2024, issue and included her among its Top 15 Emerging Scholars. The publication highlights emerging scholars making an impact on education on college campuses nationwide.

“Receiving this national recognition is wonderful, and I’m honored to share this platform with other outstanding scholars from different disciplines,” said Dueñas.

Dueñas is an assistant professor in the Department of Educational Leadership and Policy Studies (ELPS) at the University of Tennessee, Knoxville, College of Education, Health, and Human Sciences (CEHHS). In addition, she serves as program coordinator for the master’s program in College Student Personnel (CSP).

Using both quantitative and qualitative research methods, Dueñas focuses on Latina/o/x/e college students’ sense of belonging and their experience with imposter syndrome. She uses holistic frameworks and critical theory to share stories and explain systemic inequities that marginalized communities face in higher education.

“My research examines the ways in which larger social processes affect students and their overall well-being while also addressing underrepresented and marginalized students in relation to retention and success,” said Dueñas.

Cristobal Salinas, Jr., an associate professor of educational leadership and research methodology at Florida Atlantic University, nominated her for this prestigious national recognition. In his nomination letter, Salinas commended Dueñas for her commitment to scholarship that pushes the boundaries of higher education through novel perspectives and an innovative approach to research.

“This commitment to pioneering scholarship has been complemented by her unwavering dedication to teaching and mentoring the next generation of scholars, which is an integral part of her academic mission,” explains Salinas.

Despite having a full plate at CEHHS, Dueñas has authored several peer-reviewed journal articles, been a guest on a podcast, and has several works she is authoring or co-authoring under review. One is “Síndrome del impostor: The Impact of the COVID-19 Pandemic on Latinx College Students’ Experiences with Imposter Syndrome.” She is co-authoring “Culturally Responsive Mentoring: A Psychosociocultural Perspective on Sustaining Students of Color Career Aspirations in STEM”.

Dueñas takes a glass-half-full approach to her work, focusing on the whole student. In other words, she says it’s about the positives that make a student’s experience successful and asking questions about what works.

“There is a changing landscape in how we think about higher education,” Dueñas says. “It’s not so much about the students adapting to higher education; it’s more about how higher education institutions support and serve students.”

Filed Under: News

Supporting STEM Teachers with Actionable Content-Based Feedback

January 25, 2024 by Jonah Hall

By Dr. Mary Lynne Derrington & Dr. Alyson Lavigne 

Please Note: This is Part 2 of a four-part series on actionable feedback. Stay tuned for the next posts that will focus on Leadership Content Knowledge (LCK) and teacher feedback in the areas of STEM, Literacy, and Early Childhood Education.

Missed the beginning of the series? Click here to read Part 1
on making teacher feedback count!

For school leaders, providing teachers with feedback in unfamiliar subject areas can be a challenge. At the same time, we know that teachers highly value feedback on their content area as well as on general pedagogical practices. When school leaders deepen their understanding of different subjects, it can prove a powerful lever for giving teachers the feedback they deserve and desire. Today, we’ll discuss ways to support teachers in the STEM (Science, Technology, Engineering, and Math) areas.

Imagine you are scheduled to observe a STEM lesson, an area where you might not feel confident. What might be some ways to prepare for this observation? Sarah Quebec Fuentes, Jo Beth Jimerson, and Mark Bloom recommend post-holing. In the context of building, this refers to digging holes deep enough to anchor fenceposts. As it pertains to your work, post-holing means engaging in an in-depth, but targeted exploration of the content area.

Another strategy is joining a STEM instructional coach or specialist for an observation and debrief. A third way to learn is to attend a STEM-focused professional development for teachers. These activities can help you think more deeply about the content and how it is taught.

In addition, you can identify subject-specific best practices to integrate into a pre-observation or post-observation conversation. This might look like adapting a subset of evaluation questions to specifically reflect STEM objectives. For example:

  1. Poses scenarios or identifies a problem that students can investigate (Bybee, et al., 2006).
  2. Fosters “an academically safe classroom [that] honors the individual as a mathematician and welcomes him or her into the social ecosystem of math” (Krall, 2018).
  3. Avoids imprecise language and overgeneralized tips or tricks (e.g., carry, borrow, FOIL) and instead uses precise mathematical language grounded in conceptual mathematical understanding (e.g., trade, regroup, distributive property) (Karp et al., 2014, 2015).
  4. Uses models to communicate complex scientific concepts, emphasizing that models are only approximations of the actual phenomena and are limited simplifications used to explain them (Krajcik & Merritt, 2013).

Let’s imagine meaningful mathematical talk emerges as an important practice from your post-holing in mathematics. In a pre-observation conversation, you might ask the teacher about their plans for creating meaningful mathematical talk in the lesson. During the observation, you can note whether those questions appeared and/or when moments of meaningful mathematical talk were taking place. In a post-observation conversation, you might ask teachers to reflect on the moments they felt meaningful mathematical talk was occurring, and what inputs yielded those outcomes.

This blog entry is part of a four-part series on actionable feedback. Stay tuned for our next two posts, which will focus on Leadership Content Knowledge (LCK) and concrete ways to provide feedback to teachers in the areas of Literacy and Early Childhood Education.

If this blog has sparked your interest and you want to learn more, check out our book, Actionable Feedback to PK-12 Teachers. And for other suggestions on supervising teachers in STEM discipline areas with specific pre-observation and post-observation prompts and key practices for observation, see Chapter 8 by Sarah Quebec Fuentes, Jo Beth Jimerson, and Mark A. Bloom.

Filed Under: News

Making the Most of Your Survey Items: Item Analysis

January 15, 2024 by Jonah Hall

By Louis Rocconi, Ph.D. 

Hi, blog world! My name is Louis Rocconi, and I am an Associate Professor and Program Coordinator in the Evaluation, Statistics, and Methodology program at The University of Tennessee, and I am MAD about item analysis. In this blog post, I want to discuss an often overlooked tool to examine and improve survey items: Item Analysis.

What is Item Analysis?

Item analysis is a set of techniques used to evaluate the quality and usefulness of test or survey items. While item analysis techniques are frequently used in test construction, these techniques are helpful when designing surveys as well. Item analysis focuses on individual items rather than the entire set of items (as measures such as Cronbach’s alpha do). Item analysis techniques can be used to identify how individuals respond to items and how well items discriminate between those with high and low scores. Item analysis can be used during pilot testing to help choose the best items for inclusion in the final set. While there are many methods for conducting item analysis, this post will focus on two: item difficulty/endorsability and item discrimination.

Item Difficulty/Endorsability

Item difficulty, or item endorsability, is simply the mean, or average, response (Meyer, 2014). For test items that have a “correct” response, we use the term item difficulty, which refers to the proportion of individuals who answered the item correctly. However, when using surveys with Likert-type response options (e.g., strongly disagree, disagree, agree, strongly agree), where there is no “correct” answer, we can think of the item mean as item endorsability or the extent to which the highest response option is endorsed. We often divide the mean, or average response, by the maximum possible response to put endorsability on the same scale as difficulty (i.e., ranging from 0 to 1).

A high difficulty value (i.e., close to 1) indicates an item that is too easy, while a low difficulty value (i.e., close to 0) suggests an overly difficult item or an item that few respondents endorse. Typically, we are looking for difficulty values between 0.3 and 0.7. Allen and Yen (1979) argue this range maximizes the information a test provides about differences among respondents. While Allen and Yen were referring to test items, surveys with Likert-type response options generally follow the same recommendations. An item with a low endorsability indicates that people have a difficult time endorsing the item or selecting higher response options such as strongly agree, whereas an item with a high endorsability indicates that the item is easy to endorse. Very high or very low values for difficulty/endorsability may indicate that we need to review the item. Examining the proportions for each response option is also useful: it shows how frequently each response category was used. If a response category is not used or is selected by only a few respondents, this may indicate that the item is ambiguous or confusing.
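To make this concrete, here is a minimal sketch in Python (pandas) of computing difficulty and endorsability and flagging items against the 0.3-0.7 guideline discussed above; the item names and responses are made up for illustration, and this is not the R code linked under Resources below.

```python
import pandas as pd

# Hypothetical dichotomously scored test items (1 = correct, 0 = incorrect)
test = pd.DataFrame({"q1": [1, 0, 1, 1, 1], "q2": [0, 0, 1, 0, 0]})

# Hypothetical Likert-type survey items scored 1-4
# (strongly disagree ... strongly agree)
survey = pd.DataFrame({"s1": [4, 3, 4, 2, 3], "s2": [1, 2, 1, 1, 2]})
max_option = 4  # maximum possible response option

# Difficulty: proportion of respondents answering each item correctly
difficulty = test.mean()

# Endorsability: item mean divided by the maximum possible response,
# placing it on the same 0-1 metric as difficulty
endorsability = survey.mean() / max_option

# Flag items outside the 0.3-0.7 range discussed above
values = pd.concat([difficulty, endorsability])
print(values[(values < 0.3) | (values > 0.7)])

# Proportion of respondents choosing each response option for one item
print(survey["s1"].value_counts(normalize=True).sort_index())
```

Items falling outside the range (here q1, q2, and s1) would simply be flagged for closer review, not automatically discarded.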

Item Discrimination

Item discrimination is a measure of the relationship between scores on an item and the overall score on the construct the survey is measuring (Meyer, 2014). It measures the degree to which an item differentiates individuals who score high on the survey from those who score low on the survey. It aids in determining whether an item is positively or negatively correlated with the total performance. We can think of item discrimination as how well an item is tapping into the latent construct. Discrimination is typically measured using an item-total correlation to assess the relationship between an item and the overall score. Pearson’s correlation and its variants (i.e., point-biserial correlation) are the most common, but other types of correlations such as biserial and polychoric correlations can be used.

Meyer (2014) suggests selecting items with positive discrimination values between 0.3 and 0.7 and items that have large variances. When the item-total correlation exceeds 0.7, it suggests the item may be redundant. A content analysis or expert review panel could be used to help decide which items to keep. A negative discrimination for an item suggests that the item is negatively related with the total score. This may suggest a data entry error, a poorly written item, or that the item needs to be reverse coded. Whatever the case, negative discrimination is a flag to let you know to inspect that item. Items with low discrimination tap into the construct poorly and should be revised or eliminated. Very easy or very difficult items can also cause low discrimination, so it is good to check whether that is a reason as well. Examining discrimination coefficients for each response option is also helpful. We typically want to see a pattern where lower response options (e.g., strongly disagree, disagree) have negative discrimination coefficients and higher response options (e.g., agree, strongly agree) have positive correlations and the magnitude of the correlations is highest at the ends of the response scale (we would look for the opposite pattern if the item is negatively worded).
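As a companion sketch (again with hypothetical items and responses), the corrected item-total correlation described above can be computed by removing each item from the total before correlating, so an item is never correlated with itself.

```python
import pandas as pd

# Hypothetical Likert-scored items; s4 is constructed to behave badly
items = pd.DataFrame({
    "s1": [4, 3, 4, 2, 1, 3, 4, 2],
    "s2": [3, 3, 4, 2, 2, 3, 4, 1],
    "s3": [4, 4, 3, 2, 1, 3, 4, 1],
    "s4": [2, 3, 1, 4, 2, 1, 3, 4],  # runs opposite to the other items
})

total = items.sum(axis=1)

for col in items.columns:
    # Corrected item-total correlation: subtract the item from the total
    rest = total - items[col]
    r = items[col].corr(rest)  # Pearson correlation by default
    if r < 0:
        note = "negative: check for reverse coding or data entry errors"
    elif r < 0.3:
        note = "low discrimination: consider revising or dropping"
    elif r > 0.7:
        note = "very high: item may be redundant"
    else:
        note = "acceptable (0.3-0.7)"
    print(f"{col}: r = {r:.2f} -> {note}")
```

Biserial or polychoric correlations could be substituted where the Pearson correlation is not appropriate for the response scale.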

Conclusion

Item difficulty/endorsability and item discrimination are two easy techniques researchers can use to help improve the quality of their survey items. These techniques can easily be implemented alongside other statistics such as internal consistency reliability.

___________________________________________________________________

References

Allen, M. & Yen, W. (1979). Introduction to measurement theory. Wadsworth.

Meyer, J. P. (2014). Applied measurement with jMetrik. Routledge.

Resources

I have created some R code and output to demonstrate how to implement and interpret an item analysis.

The Standards for Educational and Psychological Testing

Filed Under: Evaluation Methodology Blog

Education, Leadership, and Policy Studies Researcher Recognized by Education Week

January 4, 2024 by Jonah Hall

Courtesy of the College of Education, Health, and Human Sciences (January 4, 2024)

Rachel White’s Superintendent Research is a Top-10 Education Study for 2023

2023 has been quite the year for Rachel White, an assistant professor in the Department of Educational Leadership and Policy Studies. She’s been nationally recognized for her early-career work in the field of educational leadership with the Jack A. Culbertson Award from the University Council for Educational Administration. She’s also been selected to serve on a United States Department of Education Regional Advisory Committee to provide advice and recommendations concerning the educational needs of the Appalachian region and how those needs can be most effectively addressed. However, her research into superintendent attrition and gender gaps has put her in the national spotlight.

Rachel White

Recently, Education Week named White’s study on attrition and gender gaps among K-12 district superintendents a Top-10 Educational Study of 2023. First published in the journal Educational Researcher, the study demonstrates the magnitude of the gender gap in part through superintendent first names. White finds that one out of every five superintendents in the United States is named Michael, David, James, Jeff, John, Robert, Steven, Chris, Brian, Scott, Mark, Kevin, Jason, Matthew, or Daniel. In fact, Education Week and EdSurge brought the story to national attention with the articles “There’s a Good Chance Your Superintendent Has One of These 15 Names” and “What Are the Odds Your Superintendent Is Named Michael, John, or David?”

In order to diversify the superintendency, women superintendents must be hired to replace outgoing men. However, drawing on the most recent data update of her National Longitudinal Superintendent Database, White recently published a data brief showing that over the last five years, 50% of the time a man turned over, he was replaced by another man, and a woman replaced a woman 10% of the time. A man replaced a woman 18% of the time, and a woman replaced a man 22% of the time.

When thinking about the importance of this research, White shared “Nearly ten years ago, the New York Times reported a similar trend among large companies: more S&P 1500 firms were being run by men named John than women, in total. The emulation of this trend in the K12 education sector, in 2024, is alarming. Public schools are often touted as “laboratories of democracy”: places where young people learn civic engagement and leadership skills to participate in a democratic society. Yet, what young people see in K12 public schools is that leadership positions—the highest positions of power in local K-12 education institutions—are primarily reserved for men.”

One thing is certain: we have a way to go when it comes to balanced gender representation in school district leadership. White’s research has shown that, while over 75 percent of teachers and 56 percent of principals are women, the pace at which the superintendent gender gap is closing feels glacial: the current 5-year national average gap closure rate is 1.4 percentage points per year. At this rate, the estimated year of national gender equality in the superintendency is 2039.

“Superintendents are among the most visible public figures in a community, interfacing with students, educators, families, business, and local government officials on a daily basis,” White shared. “A lack of diversity in these leadership positions can convey that a district is unwelcoming of diverse leaders that bring valuable insights and perspectives to education policy and leadership work.”

White continued, “Not only do we need to recruit and hire diverse leaders to the superintendency, but school boards and communities need to be committed to respecting, valuing, and supporting diverse district superintendents. New analyses of the updated NLSD show that women’s attrition rates spiked from 16.8% to 18.2% over the past year, while men’s remained stable around 17% for the past three years. We need to really reflect and empirically examine why this pattern has emerged, and what school boards, communities, and organizations and universities preparing and supporting women leaders can do to change this trajectory.”

 White has doubled down on her commitment to establishing rigorous and robust research on superintendents with the launch of The Superintendent Lab—a hub for data and research on school district superintendency. In fact, The Superintendent Lab is home to The National Longitudinal Superintendent Database, with data on over 12,500 superintendents across the United States, updated annually. With the 2023-24 database update completed, the NLSD now houses over 65,000 superintendent-year data points. The database allows the lab team to learn more about issues related to superintendent labor markets over time, and even produce interactive data visualizations for the public to better understand trends in superintendent gender gaps and attrition.

Along with a team of 10 research assistants and lab affiliates, White hopes to foster a collaborative dialogue among policy leaders that may lead to identifying ways to create more inclusive and equitable K-12 school systems.

“A comprehensive understanding of the superintendency in every place and space in the United States has really never been prioritized or pursued. My hope is that, through The Superintendent Lab, and the development of rigorous and robust datasets and research, I can elevate data-driven dialogue to advance policies and practices that contribute to more equitable and inclusive spaces in education. And, along the way, I am passionate about the Lab being a space for students from all levels to engage in meaningful research experiences – potentially igniting a spark in others to use their voice and pursue opportunities that will contribute to greater equity and inclusion in K12 education leadership,” said White.

Filed Under: News

Kelchen Once Again Named Top Scholar Influencer

January 4, 2024 by Jonah Hall

Courtesy of the College of Education, Health, and Human Sciences (January 4, 2024)

We’ve all heard the term “influencer.” Many of us think of an influencer as someone with a large following on social media, such as Instagram or YouTube, who sets trends or promotes products. But did you know that there is a select group of scholar influencers who help shape educational practice and policy?

Robert Kelchen

One of those scholar influencers is Robert Kelchen, who serves as department head of Educational Leadership and Policy Studies (ELPS) at the University of Tennessee, Knoxville, College of Education, Health, and Human Sciences (CEHHS). Kelchen is ranked 41 out of 20,000 scholars nationwide in Education Week’s Edu-Scholar Public Influence Rankings for 2024. In fact, Kelchen is the only scholar from the University of Tennessee, Knoxville, to make the list.

“As a faculty member at a land-grant university, it is my job to help share knowledge well beyond the classroom or traditional academic journals,” said Kelchen. “I am thrilled to have the opportunity to work with policymakers, journalists, and college leaders on a regular basis to help improve higher education.”

For 14 years, Education Week has selected the top 200 scholars (out of an eligible pool of 20,000) from across the United States who have the most influence on issues and policy in education. The list is compiled by opinion columnist Rick Hess, resident scholar at the American Enterprise Institute and director of Education Policy Studies.

The selection process includes a 38-member selection committee made up of university scholars representing public and private institutions from across the United States. The committee calculates scores based on Google Scholar citations, book points, Amazon rankings, Congressional Record mentions, and media and web appearances, and then ranks the scholars accordingly. Kelchen is considered a “go-to” source for reporters covering issues in higher education, with over 200 media interviews year after year. If there is a story about higher education in the media, you’ll more than likely find a quote from Kelchen as an expert source.

“In the last year, I have had the pleasure of supporting several states on their higher education funding models, presenting to groups of legislators, and being a resource to reporters diving into complex higher education finance topics. These engagements help strengthen my own research and give me the opportunity to teach cutting-edge classes to ELPS students,” said Kelchen.

In addition, Kelchen received national recognition by the Association for the Study of Higher Education (ASHE) for his research on higher education finance, accountability policies and practices, and student financial aid. ASHE’s Council on Public Policy in Higher Education selected Kelchen for its Excellence in Public Policy Higher Education Award.

Through its eight departments and 12 centers, the UT Knoxville College of Education, Health, and Human Sciences enhances the quality of life for all through research, outreach, and practice. Find out more at cehhs.utk.edu

Filed Under: News

Are Evaluation PhD Programs Offering Training in Qualitative and Mixed Design Methodologies?

January 1, 2024 by Jonah Hall

By Kiley Compton

Hello! My name is Kiley Compton and I am a fourth-year doctoral student in UT’s Evaluation, Statistics, and Methodology (ESM) program. My research interests include program evaluation, research administration, and sponsored research metrics.  

One of the research projects I worked on as part of the ESM program examined curriculum requirements in educational evaluation, assessment, and research (EAR) doctoral programs. Our team was composed of first- and second-year ESM doctoral students with diverse backgrounds, research interests, and skill sets.

An overwhelming amount of preliminary data forced us to reconsider the scope of the project. The broad focus of the study was not manageable, so we narrowed the scope and focused on the prevalence of mixed method and qualitative research methodology courses offered in U.S. PhD programs.  Experts in the field of evaluation encourage the use of qualitative and mixed method approaches to gain an in-depth understanding of the program, process, or policy being evaluated (Bamberger, 2015; Patton, 2014).  The American Evaluation Association developed a series of competencies to inform evaluation education and training standards, which includes competency in “quantitative, qualitative, and mixed designs” methodologies (AEA, 2018). Similarly, Skolits et al. (2009) advocate for professional training content that reflects the complexity of evaluations.  

This study was guided by the following research question: what is the prevalence of qualitative and mixed methods courses in Educational Assessment, Evaluation, and Research PhD programs? Sub-questions include 1) to what extent are the courses required, elective, or optional? and 2) to what extent are these courses offered at more advanced levels? For the purpose of this study, elective courses are those that fulfill a specific, focused requirement, while optional courses are those that are offered but do not fulfill elective requirements.  

Methods 

This study focused on PhD programs similar to UT’s ESM program. PhD programs from public and private institutions were selected based on the U.S. Department of Education’s National Center for Education Statistics (NCES) Classification of Instructional Programs (CIP) assignment. Programs under the 13.06 “Educational Assessment, Evaluation, and Research” CIP umbrella were included.  We initially identified a total of 50 programs. 

Our team collected and reviewed available program- and course-level data from program websites, handbooks, and catalogs, and assessed which elements were necessary to answer the research questions. We created a comprehensive data code book based on agreed upon definitions and met regularly throughout the data collection process to assess progress, discuss ambiguous data, and refine definitions as needed. More than 14 program-level data points were collected, including program overview, total credit hours required, and number of dissertation hours required. Additionally, available course data were collected, including course number, name, type, level, requirement level, description, and credit hours. While 50 programs were identified, only 36 of the 50 programs were included in the final analysis due to unavailable or incomplete data. After collecting detailed information for the 36 programs, course-level information was coded based on the variables of interest: course type, course level, and requirement level.  

Results

Prevalence of qualitative and mixed methods courses

The team analyzed data from 1,134 courses representing 36 programs, both in aggregate and within individual programs. Results show that only 14% (n=162) of the courses offered or required to graduate were identified as primarily qualitative and only 1% (n=17) of these courses were identified as mixed methods research (MMR). Further, only 6% (n=70) of these courses were identified as evaluation courses (Table 1). Out of 36 programs, three programs offered no qualitative courses. Qualitative courses made up somewhere between 1% and 20% of course offerings for 28 programs. Only five of the programs reviewed exceeded 20%. Only 12 programs offered any mixed methods courses and MMR courses made up less than 10% of the course offerings in each of those programs. 

Table 1. Aggregate Course Data by Type and Representation

Course Type              n (%)          Program Count
Quantitative Methods     409 (36%)      36 (100%)
Other                    317 (28%)      36 (100%)
Qualitative Methods      162 (14%)      33 (92%)
Research Methods         159 (14%)      36 (100%)
Program Evaluation        70 (6%)       36 (100%)
Mixed Methods             17 (1%)       12 (33%)
Total                  1,134 (100%)     –
 

Requirement level of qualitative and mixed method courses 

Out of 162 qualitative courses, 41% (n=66) were listed as required, 43% (n=69) were listed as elective, and 16% (n=26) were listed as optional (figure 2). Out of 17 mixed methods research courses, 65% (n=11) were listed as required and 35% (n=6) were listed as elective.  

Course level of qualitative and mixed-method courses 

Out of 162 qualitative courses, 73% (n=118) were offered at an advanced level and 27% (n=73) were offered at an introductory level. Out of 17 mixed methods research courses, 71% (n=12) were offered at an advanced level and 29% (n=5) were offered at an introductory level. 
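For readers curious about the mechanics, the sketch below (in Python/pandas, with made-up records and hypothetical column names rather than the team's actual code book) shows how coded course-level data of this kind can be rolled up into counts and percentages like those reported above.

```python
import pandas as pd

# Toy, hypothetical coded course records (one row per course)
courses = pd.DataFrame({
    "program_id":        [1, 1, 1, 2, 2, 3, 3, 3],
    "course_type":       ["Quantitative Methods", "Qualitative Methods",
                          "Program Evaluation", "Quantitative Methods",
                          "Mixed Methods", "Qualitative Methods",
                          "Research Methods", "Other"],
    "requirement_level": ["required", "elective", "required", "required",
                          "optional", "required", "required", "elective"],
    "course_level":      ["introductory", "advanced", "advanced",
                          "introductory", "advanced", "introductory",
                          "advanced", "advanced"],
})

# Aggregate prevalence by course type (the structure of Table 1)
by_type = courses["course_type"].value_counts()
table1 = pd.DataFrame({
    "n": by_type,
    "%": (by_type / len(courses) * 100).round(1),
    "programs offering": courses.groupby("course_type")["program_id"].nunique(),
})
print(table1)

# Requirement level and course level within the qualitative courses
qual = courses[courses["course_type"] == "Qualitative Methods"]
print(qual["requirement_level"].value_counts(normalize=True).round(2))
print(qual["course_level"].value_counts(normalize=True).round(2))
```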

Discussion 

Findings from the study provide valuable insight into the landscape of doctoral curriculum in Educational Assessment, Evaluation, and Research programs. Both qualitative and mixed methods courses were underrepresented in the programs analyzed. However, the majority of course offerings were required and classified as advanced. Given that various methodologies are needed to conduct rigorous evaluations, it is our hope that these findings will encourage doctoral training programs to include more courses on mixed and qualitative methods, and that they will encourage seasoned and novice evaluators to seek out training on these methodologies.

This study highlights opportunities for collaborative work in the ESM program and the ESM faculty’s commitment to fostering professional development. The project began in a research seminar, and ESM faculty mentored us through proposal development, data collection and analysis, and dissemination. They also encouraged us to share our findings at conferences and in journals and helped us through the process of drafting and submitting abstracts and manuscripts. Faculty worked closely with our team through every step of the process, serving as both expert consultants and supportive colleagues.

The study also highlights how messy data can get. Our team even affectionately nicknamed the project “​​messy MESA,” owing to challenges, including changes to the scope, missing data, and changes to the team as students left and joined, along with the common acronym for measurement, evaluation, statistics, and assessment (MESA). While I hope that the product of our study will contribute to the fields of evaluation, assessments, and applied research, the process has made me a better researcher.  

References 

American Evaluation Association. (2018). AEA evaluator competencies. https://www.eval.org/About/Competencies-Standards/AEA-Evaluator-Competencies

Bamberger, M. (2015). Innovations in the use of mixed methods in real-world evaluation. Journal of Development Effectiveness, 7(3), 317–326. https://doi.org/10.1080/19439342.2015.1068832 

Capraro, R. M., & Thompson, B. (2008). The educational researcher defined: What will future researchers be trained to do? The Journal of Educational Research, 101, 247-253. doi:10.3200/JOER.101.4.247-253 

Dillman, L. (2013). Evaluator skill acquisition: Linking educational experiences to competencies. The American Journal of Evaluation, 34(2), 270–285. https://doi.org/10.1177/1098214012464512 

Engle, M., Altschuld, J. W., & Kim, Y. C. (2006). 2002 Survey of evaluation preparation programs in universities: An update of the 1992 American Evaluation Association–sponsored study. American Journal of Evaluation, 27(3), 353-359.  

LaVelle, J. M. (2020). Educating evaluators 1976–2017: An expanded analysis of university-based evaluation education programs. American Journal of Evaluation, 41(4), 494-509. 

LaVelle, J. M., & Donaldson, S. I. (2015). The state of preparing evaluators. In J. W. Altschuld & M.Engle (Eds.), Accreditation, certification, and credentialing: Relevant concerns for U.S. evaluators. New Directions for Evaluation,145, 39–52. 

Leech, N. L., & Goodwin, L. D. (2008). Building a methodological foundation: Doctoral-Level methods courses in colleges of education. Research in the Schools, 15(1). 

Leech, N. L., & Haug, C. A. (2015). Investigating graduate level research and statistics courses in schools of education. International Journal of Doctoral Studies, 10, 93-110. Retrieved from http://ijds.org/Volume10/IJDSv10p093-110Leech0658.pdf 

Levine, A. (2007). Educating researchers. Washington, DC: The Education Schools Project. 

Mathison, S. (2008). What is the difference between evaluation and research—and why do we care. Fundamental Issues in Evaluation, 183-196. 

McAdaragh, M. O., LaVelle, J. M., & Zhang, L. (2020). Evaluation and supporting inquiry courses in MSW programs. Research on Social Work Practice, 30(7), 750-759. https://doi.org/10.1177/1049731520921243

McEwan, H., & Slaughter, H. (2004). A brief history of the college of education’s doctoral degrees. Educational Perspectives, 2(37), 3-9. Retrieved from https://files.eric.ed.gov/fulltext/EJ877606.pdf

National Center for Education Statistics. (2020). The Classification of Instructional Programs [Data set]. https://nces.ed.gov/ipeds/cipcode/default.aspx?y=56.  

Page, R. N. (2001). Reshaping graduate preparation in educational research methods: One school’s experience. Educational Researcher, 30(5), 19-25. 

Patton, M.Q. (2014). Qualitative evaluation and research methods (4th ed.). Sage Publications. 

Paul, C. A. (n.d.). Elementary and Secondary Education Act of 1965. Social Welfare History Project. Retrieved from https://socialwelfare.library.vcu.edu/programs/education/elementary-and-secondary-education-act-of-1965/

Seidling, M. B. (2015). Evaluator certification and credentialing revisited: A survey of American Evaluation Association members in the United States. In J. W. Altschuld & M. Engle (Eds.), Accreditation, certification, and credentialing: Relevant concerns for U.S. evaluators. New Directions for Evaluation,145, 87–102 

Skolits, G. J., Morrow, J. A., & Burr, E. M. (2009). Reconceptualizing evaluator roles. American Journal of Evaluation, 30(3), 275-295. 

Standerfer, L. (2006). Before NCLB: The history of ESEA. Principal Leadership, 6(8), 26-27. 

Trevisan, M. S. (2004). Practical training in evaluation: A review of the literature. American Journal of Evaluation, 25(2), 255-272. 

Warner, L. H. (2020). Developing interpersonal skills of evaluators: A service-learning approach. American Journal of Evaluation, 41(3), 432-451. 

 

Filed Under: Evaluation Methodology Blog, News

