Educational Leadership and Policy Studies

Archives for January 2024

Dueñas Highlighted as a 2024 Emerging Scholar by Diverse Issues in Higher Education

January 31, 2024 by Jonah Hall

Courtesy of the College of Education, Health, and Human Sciences

Mary Dueñas is passionate about student success, especially among underrepresented and marginalized student populations. Driven by that passion for helping students thrive in higher education, she dedicates much of her scholarship to examining equity and access issues in higher education.

Mary Dueñas

Her work hasn’t gone unnoticed. Diverse Issues in Higher Education recently named Dueñas “An Equity and Access Champion” in its January 18, 2024, issue and named her one of its Top 15 Emerging Scholars. The publication highlights emerging scholars making an impact on college campuses nationwide.

“Receiving this national recognition is wonderful, and I’m honored to share this platform with other outstanding scholars from different disciplines,” said Dueñas.

Dueñas is an assistant professor in the department of Educational Leadership and Policy Studies (ELPS) at the University of Tennessee, Knoxville, College of Education, Health, and Human Sciences (CEHHS). She also serves as program coordinator for the master’s program in College Student Personnel (CSP).

Using both quantitative and qualitative research methods, Dueñas focuses on Latina/o/x/e college students’ sense of belonging and their experiences with imposter syndrome. She uses holistic frameworks and critical theory to share stories and explain systemic inequities that marginalized communities face in higher education.

“My research examines the ways in which larger social processes affect students and their overall well-being while also addressing underrepresented and marginalized students in relation to retention and success,” said Dueñas.

Cristobal Salinas, Jr., an associate professor of educational leadership and research methodology at Florida Atlantic University, nominated her for this prestigious national recognition. In his nomination letter, Salinas commended Dueñas for her commitment to scholarship that pushes the boundaries of higher education through novel perspectives and an innovative approach to research.

“This commitment to pioneering scholarship has been complemented by her unwavering dedication to teaching and mentoring the next generation of scholars, which is an integral part of her academic mission,” explains Salinas.

Despite having a full plate at CEHHS, Dueñas has authored several peer-reviewed journal articles, appeared as a guest on a podcast, and has several works she is authoring or co-authoring under review. One is “Síndrome del impostor: The Impact of the COVID-19 Pandemic on Latinx College Students’ Experiences with Imposter Syndrome.” Another, which she is co-authoring, is “Culturally Responsive Mentoring: A Psychosociocultural Perspective on Sustaining Students of Color Career Aspirations in STEM.”

Dueñas takes a glass-half-full approach to her work, focusing on the whole student. In other words, she says it’s about the positives that make a student’s experience successful and asking questions about what works.

“There is a changing landscape in how we think about higher education,” Dueñas says. “It’s not so much about students adapting to higher education; it’s more about how higher education institutions support and serve students.”

Filed Under: News

Supporting STEM Teachers with Actionable Content-Based Feedback

January 25, 2024 by Jonah Hall

By Dr. Mary Lynne Derrington & Dr. Alyson Lavigne 

Please Note: This is Part 2 of a four-part series on actionable feedback. Stay tuned for the next posts, which will apply Leadership Content Knowledge (LCK) to teacher feedback in the areas of Literacy and Early Childhood Education.

Missed the beginning of the series? Click here to read Part 1 on making teacher feedback count!

For school leaders, providing teachers with feedback in unfamiliar subject areas can be a challenge. At the same time, we know that teachers highly value feedback on their content area as well as on general pedagogical practices. When school leaders deepen their understanding of different subjects, it can prove a powerful lever for giving teachers the feedback they deserve and desire. Today, we’ll discuss ways to support teachers in the STEM (Science, Technology, Engineering, and Math) areas.

Imagine you are scheduled to observe a STEM lesson, an area where you might not feel confident. What might be some ways to prepare for this observation? Sarah Quebec Fuentes, Jo Beth Jimerson, and Mark Bloom recommend post-holing. In the context of building, this refers to digging holes deep enough to anchor fenceposts. As it pertains to your work, post-holing means engaging in an in-depth but targeted exploration of the content area.

Another strategy is joining a STEM instructional coach or specialist for an observation and debrief. A third way to learn is to attend a STEM-focused professional development for teachers. These activities can help you think more deeply about the content and how it is taught.

In addition, you can identify subject-specific best practices to integrate into a pre-observation or post-observation conversation. This might look like adapting a subset of evaluation questions to specifically reflect STEM objectives. For example:

  1. Poses scenarios or identifies a problem that students can investigate (Bybee et al., 2006).
  2. Fosters “an academically safe classroom [that] honors the individual as a mathematician and welcomes him or her into the social ecosystem of math” (Krall, 2018).
  3. Avoids imprecise language and overgeneralized tips or tricks (e.g., carry, borrow, FOIL) and instead uses precise mathematical language grounded in conceptual mathematical understanding (e.g., trade, regroup, distributive property) (Karp et al., 2014, 2015).
  4. Uses models to communicate complex scientific concepts, emphasizing that models are only approximations of the actual phenomena and are limited simplifications used to explain them (Krajcik & Merritt, 2013).

Let’s imagine meaningful mathematical talk emerges as an important practice from your post-holing in mathematics. In a pre-observation conversation, you might ask the teacher about their plans for creating meaningful mathematical talk in the lesson. During the observation, you can note whether those questions appeared and when moments of meaningful mathematical talk took place. In a post-observation conversation, you might ask the teacher to reflect on the moments they felt meaningful mathematical talk was occurring and what inputs yielded those outcomes.

This blog entry is part of a four-part series on actionable feedback. Stay tuned for our next two posts, which will offer concrete ways to provide feedback to teachers in the areas of Literacy and Early Childhood Education.

If this blog has sparked your interest and you want to learn more, check out our book, Actionable Feedback to PK-12 Teachers. And for other suggestions on supervising teachers in STEM discipline areas with specific pre-observation and post-observation prompts and key practices for observation, see Chapter 8 by Sarah Quebec Fuentes, Jo Beth Jimerson, and Mark A. Bloom.

Filed Under: News

Making the Most of Your Survey Items: Item Analysis

January 15, 2024 by Jonah Hall

By Louis Rocconi, Ph.D. 

Hi, blog world! My name is Louis Rocconi, and I am an Associate Professor and Program Coordinator in the Evaluation, Statistics, and Methodology program at the University of Tennessee. I am MAD about item analysis. In this blog post, I want to discuss an often-overlooked tool for examining and improving survey items: item analysis.

What is Item Analysis?

Item analysis is a set of techniques used to evaluate the quality and usefulness of test or survey items. While item analysis techniques are frequently used in test construction, they are helpful when designing surveys as well. Item analysis focuses on individual items rather than the entire set of items (as statistics such as Cronbach’s alpha do). Item analysis techniques can be used to identify how individuals respond to items and how well items discriminate between those with high and low scores. Item analysis can also be used during pilot testing to help choose the best items for inclusion in the final set. While there are many methods for conducting item analysis, this post will focus on two: item difficulty/endorsability and item discrimination.

Item Difficulty/Endorsability

Item difficulty, or item endorsability, is simply the mean, or average, response (Meyer, 2014). For test items that have a “correct” response, we use the term item difficulty, which refers to the proportion of individuals who answered the item correctly. However, when using surveys with Likert-type response options (e.g., strongly disagree, disagree, agree, strongly agree), where there is no “correct” answer, we can think of the item mean as item endorsability or the extent to which the highest response option is endorsed. We often divide the mean, or average response, by the maximum possible response to put endorsability on the same scale as difficulty (i.e., ranging from 0 to 1).

A high difficulty value (i.e., close to 1) indicates an item that is too easy, while a low difficulty value (i.e., close to 0) suggests an overly difficult item or an item that few respondents endorse. Typically, we are looking for difficulty values between 0.3 and 0.7. Allen and Yen (1979) argue this range maximizes the information a test provides about differences among respondents. While Allen and Yen were referring to test items, surveys with Likert-type response options generally follow the same recommendations. An item with low endorsability indicates that people have a difficult time endorsing the item or selecting higher response options such as strongly agree, whereas an item with high endorsability is easy to endorse. Very high or very low values for difficulty/endorsability may indicate that we need to review the item. Examining the proportions for each response option is also useful, since it shows how frequently each response category was used. If a response category is not used or is selected by only a few respondents, this may indicate that the item is ambiguous or confusing.
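
To make this concrete, here is a minimal sketch of both checks. The companion code for this post is in R (see the Resources section below); this Python sketch is my own illustrative translation, using made-up data and assuming responses are coded 0-3, so none of it reflects the post's actual examples.

```python
# Minimal sketch of item difficulty/endorsability with made-up Likert data.
# Assumption: responses coded 0-3 (strongly disagree = 0 ... strongly agree = 3),
# so mean/max falls between 0 and 1, matching the difficulty scale above.
import numpy as np

# Rows = respondents, columns = items (hypothetical data for illustration).
responses = np.array([
    [3, 2, 0, 3],
    [2, 2, 1, 3],
    [3, 1, 0, 2],
    [1, 2, 0, 3],
    [2, 3, 1, 3],
    [0, 1, 0, 1],
])
MAX_RESPONSE = 3  # highest possible response option

# Endorsability: item mean divided by the maximum possible response.
endorsability = responses.mean(axis=0) / MAX_RESPONSE
for i, e in enumerate(endorsability, start=1):
    flag = "" if 0.3 <= e <= 0.7 else "  <- outside 0.3-0.7, review item"
    print(f"Item {i}: endorsability = {e:.2f}{flag}")

# Proportion of respondents choosing each response option, per item.
# A rarely used option can signal an ambiguous or confusing item.
for i in range(responses.shape[1]):
    counts = np.bincount(responses[:, i], minlength=MAX_RESPONSE + 1)
    props = ", ".join(f"{p:.2f}" for p in counts / counts.sum())
    print(f"Item {i + 1} option proportions (0 through 3): {props}")
```

In this toy data, item 3 would be flagged as hard to endorse (endorsability near 0.1) and item 4 as very easy to endorse (near 0.8), exactly the review triggers described above.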

Item Discrimination

Item discrimination is a measure of the relationship between scores on an item and the overall score on the construct the survey is measuring (Meyer, 2014). It measures the degree to which an item differentiates individuals who score high on the survey from those who score low on the survey. It aids in determining whether an item is positively or negatively correlated with the total performance. We can think of item discrimination as how well an item is tapping into the latent construct. Discrimination is typically measured using an item-total correlation to assess the relationship between an item and the overall score. Pearson’s correlation and its variants (i.e., point-biserial correlation) are the most common, but other types of correlations such as biserial and polychoric correlations can be used.

Meyer (2014) suggests selecting items with positive discrimination values between 0.3 and 0.7 and items that have large variances. When the item-total correlation exceeds 0.7, it suggests the item may be redundant. A content analysis or expert review panel could be used to help decide which items to keep. A negative discrimination for an item suggests that the item is negatively related with the total score. This may suggest a data entry error, a poorly written item, or that the item needs to be reverse coded. Whatever the case, negative discrimination is a flag to let you know to inspect that item. Items with low discrimination tap into the construct poorly and should be revised or eliminated. Very easy or very difficult items can also cause low discrimination, so it is good to check whether that is a reason as well. Examining discrimination coefficients for each response option is also helpful. We typically want to see a pattern where lower response options (e.g., strongly disagree, disagree) have negative discrimination coefficients and higher response options (e.g., agree, strongly agree) have positive correlations and the magnitude of the correlations is highest at the ends of the response scale (we would look for the opposite pattern if the item is negatively worded).
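
Continuing the hypothetical 0-3 data from the sketch above, a corrected item-total correlation (one common variant of the item-total approach described here) might be computed like this:

```python
# Minimal sketch of item discrimination via corrected item-total correlations.
# "Corrected" means each item is correlated with the sum of the *other* items,
# so an item does not inflate its own correlation. Hypothetical 0-3 data.
import numpy as np

responses = np.array([
    [3, 2, 0, 3],
    [2, 2, 1, 3],
    [3, 1, 0, 2],
    [1, 2, 0, 3],
    [2, 3, 1, 3],
    [0, 1, 0, 1],
])
totals = responses.sum(axis=1)  # overall score per respondent

for i in range(responses.shape[1]):
    rest = totals - responses[:, i]  # total score excluding item i
    r = np.corrcoef(responses[:, i], rest)[0, 1]
    if r < 0:
        note = "  <- negative: check for data entry or reverse-coding issues"
    elif r < 0.3:
        note = "  <- low: may tap the construct poorly"
    elif r > 0.7:
        note = "  <- high: possibly redundant"
    else:
        note = ""
    print(f"Item {i + 1}: corrected item-total r = {r:.2f}{note}")
```

The thresholds in the comments mirror the 0.3-0.7 guidance above; a fuller analysis would also inspect option-level discrimination coefficients as described in the preceding paragraph.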

Conclusion

Item difficulty/endorsability and item discrimination are two easy techniques researchers can use to help improve the quality of their survey items. These techniques can easily be implemented alongside other statistics, such as internal consistency reliability.

___________________________________________________________________

References

Allen, M. & Yen, W. (1979). Introduction to measurement theory. Wadsworth.

Meyer, J. P. (2014). Applied measurement with jMetrik. Routledge.

Resources

I have created some R code and output to demonstrate how to implement and interpret an item analysis.

The Standards for Educational and Psychological Testing

Filed Under: Evaluation Methodology Blog

Educational Leadership and Policy Studies Researcher Recognized by Education Week

January 4, 2024 by Jonah Hall

Courtesy of the College of Education, Health, and Human Sciences (January 4, 2024)

Rachel White’s Superintendent Research is a Top-10 Education Study for 2023

2023 has been quite the year for Rachel White, an assistant professor in the department of Educational Leadership and Policy Studies. She’s been nationally recognized for her early-career work in the field of educational leadership with the Jack A. Culbertson Award from the University Council for Educational Administration. She’s also been selected to serve on a United States Department of Education Regional Advisory Committee to provide advice and recommendations concerning the educational needs of the Appalachian region and how those needs can be most effectively addressed. However, it is her research into superintendent attrition and gender gaps that has put her in the national spotlight.

Rachel White

Recently, Education Week named White’s study on attrition and gender gaps among K-12 district superintendents a Top-10 Education Study of 2023. First published in the journal Educational Researcher, the study demonstrates the magnitude of the gender gap in part through superintendent first names: White finds that one out of every five superintendents in the United States is named Michael, David, James, Jeff, John, Robert, Steven, Chris, Brian, Scott, Mark, Kevin, Jason, Matthew, or Daniel. Education Week and EdSurge brought the story to national attention with the articles “There’s a Good Chance Your Superintendent Has One of These 15 Names” and “What Are the Odds Your Superintendent Is Named Michael, John, or David?”

In order to diversify the superintendency, women superintendents must be hired to replace outgoing men. However, drawing on the most recent data update of her National Longitudinal Superintendent Database, White recently published a data brief showing that over the last five years, 50% of the time a man turned over, he was replaced by another man, and a woman replaced a woman 10% of the time. A man replaced a woman 18% of the time, and a woman replaced a man 22% of the time.

When thinking about the importance of this research, White shared, “Nearly ten years ago, the New York Times reported a similar trend among large companies: more S&P 1500 firms were being run by men named John than by women in total. The emulation of this trend in the K-12 education sector, in 2024, is alarming. Public schools are often touted as ‘laboratories of democracy’: places where young people learn civic engagement and leadership skills to participate in a democratic society. Yet what young people see in K-12 public schools is that leadership positions—the highest positions of power in local K-12 education institutions—are primarily reserved for men.”

One thing is for certain: we have a way to go when it comes to balanced gender representation in school district leadership. White’s research has shown that, while over 75 percent of teachers and 56 percent of principals are women, the pace at which the superintendent gender gap is closing feels glacial: the current five-year national average closure rate is 1.4 percentage points per year. At this rate, the estimated year of national gender equality in the superintendency is 2039.

“Superintendents are among the most visible public figures in a community, interfacing with students, educators, families, business, and local government officials on a daily basis,” White shared. “A lack of diversity in these leadership positions can convey that a district is unwelcoming of diverse leaders that bring valuable insights and perspectives to education policy and leadership work.”

White continued, “Not only do we need to recruit and hire diverse leaders to the superintendency, but school boards and communities need to be committed to respecting, valuing, and supporting diverse district superintendents. New analyses of the updated NLSD show that women’s attrition rates spiked from 16.8% to 18.2% over the past year, while men’s remained stable around 17% for the past three years. We need to really reflect and empirically examine why this pattern has emerged, and what school boards, communities, and organizations and universities preparing and supporting women leaders can do to change this trajectory.”

White has doubled down on her commitment to establishing rigorous and robust research on superintendents with the launch of The Superintendent Lab—a hub for data and research on the school district superintendency. In fact, The Superintendent Lab is home to the National Longitudinal Superintendent Database, with data on over 12,500 superintendents across the United States, updated annually. With the 2023-24 database update completed, the NLSD now houses over 65,000 superintendent-year data points. The database allows the lab team to learn more about issues related to superintendent labor markets over time, and even to produce interactive data visualizations that help the public better understand trends in superintendent gender gaps and attrition.

Along with a team of 10 research assistants and lab affiliates, White hopes to foster a collaborative dialogue among policy leaders that may lead to identifying ways to create more inclusive and equitable K-12 school systems.

“A comprehensive understanding of the superintendency in every place and space in the United States has really never been prioritized or pursued. My hope is that, through The Superintendent Lab, and the development of rigorous and robust datasets and research, I can elevate data-driven dialogue to advance policies and practices that contribute to more equitable and inclusive spaces in education. And, along the way, I am passionate about the Lab being a space for students from all levels to engage in meaningful research experiences – potentially igniting a spark in others to use their voice and pursue opportunities that will contribute to greater equity and inclusion in K-12 education leadership,” said White.

Filed Under: News

Kelchen Once Again Named Top Scholar Influencer

January 4, 2024 by Jonah Hall

Courtesy of the College of Education, Health, and Human Sciences (January 4, 2024)

We’ve all heard the term “influencer.” Many of us think of an influencer as someone with a large following on social media, such as Instagram or YouTube, who sets trends or promotes products. But did you know that there is a select group of scholar influencers who help shape educational practice and policy?

Robert Kelchen

One of those scholar influencers is Robert Kelchen, who serves as department head of Educational Leadership and Policy Studies (ELPS) at the University of Tennessee, Knoxville, College of Education, Health, and Human Sciences (CEHHS). Kelchen is ranked 41st out of 20,000 scholars nationwide in Education Week’s Edu-Scholar Public Influence Rankings for 2024. In fact, Kelchen is the only scholar from the University of Tennessee, Knoxville, to make the list.

“As a faculty member at a land-grant university, it is my job to help share knowledge well beyond the classroom or traditional academic journals,” said Kelchen. “I am thrilled to have the opportunity to work with policymakers, journalists, and college leaders on a regular basis to help improve higher education.”

For 14 years, Education Week has selected the top 200 scholars (out of an eligible pool of 20,000) from across the United States as having the most influence on issues and policy in education. The list is compiled by opinion columnist Rick Hess, resident scholar at the American Enterprise Institute and director of Education Policy Studies.

The selection process includes a 38-member selection committee made up of university scholars representing public and private institutions from across the United States. The committee calculates scores based on Google Scholar scores, book points, Amazon rankings, Congressional Record mentions, and media and web appearances, and then ranks the scholars accordingly. Kelchen is considered a “go-to” source for reporters covering issues in higher education, with over 200 media interviews year after year. If there is a story about higher education in the media, you’ll more than likely find a quote from Kelchen as an expert source.

“In the last year, I have had the pleasure of supporting several states on their higher education funding models, presenting to groups of legislators, and being a resource to reporters diving into complex higher education finance topics. These engagements help strengthen my own research and give me the opportunity to teach cutting-edge classes to ELPS students,” said Kelchen.

In addition, Kelchen received national recognition from the Association for the Study of Higher Education (ASHE) for his research on higher education finance, accountability policies and practices, and student financial aid. ASHE’s Council on Public Policy in Higher Education selected Kelchen for its Excellence in Public Policy Higher Education Award.

Through its eight departments and 12 centers, the UT Knoxville College of Education, Health, and Human Sciences enhances the quality of life for all through research, outreach, and practice. Find out more at cehhs.utk.edu.

Filed Under: News

Are Evaluation PhD Programs Offering Training in Qualitative and Mixed Design Methodologies?

January 1, 2024 by Jonah Hall

By Kiley Compton

Hello! My name is Kiley Compton, and I am a fourth-year doctoral student in UT’s Evaluation, Statistics, and Methodology (ESM) program. My research interests include program evaluation, research administration, and sponsored research metrics.

One of the research projects I worked on as part of the ESM program examined curriculum requirements in educational evaluation, assessment, and research (EAR) doctoral programs. Our team was composed of first- and second-year ESM doctoral students with diverse backgrounds, research interests, and skill sets.

An overwhelming amount of preliminary data forced us to reconsider the scope of the project. The broad focus of the study was not manageable, so we narrowed the scope to the prevalence of mixed methods and qualitative research methodology courses offered in U.S. PhD programs. Experts in the field of evaluation encourage the use of qualitative and mixed methods approaches to gain an in-depth understanding of the program, process, or policy being evaluated (Bamberger, 2015; Patton, 2014). The American Evaluation Association developed a series of competencies to inform evaluation education and training standards, which include competency in “quantitative, qualitative, and mixed designs” methodologies (AEA, 2018). Similarly, Skolits et al. (2009) advocate for professional training content that reflects the complexity of evaluations.

This study was guided by the following research question: What is the prevalence of qualitative and mixed methods courses in Educational Assessment, Evaluation, and Research PhD programs? Sub-questions include 1) to what extent are the courses required, elective, or optional? and 2) to what extent are these courses offered at more advanced levels? For the purposes of this study, elective courses are those that fulfill a specific, focused requirement, while optional courses are those that are offered but do not fulfill elective requirements.

Methods 

This study focused on PhD programs similar to UT’s ESM program. PhD programs from public and private institutions were selected based on the U.S. Department of Education’s National Center for Education Statistics (NCES) Classification of Instructional Programs (CIP) assignment. Programs under the 13.06 “Educational Assessment, Evaluation, and Research” CIP umbrella were included.  We initially identified a total of 50 programs. 

Our team collected and reviewed available program- and course-level data from program websites, handbooks, and catalogs, and assessed which elements were necessary to answer the research questions. We created a comprehensive data code book based on agreed-upon definitions and met regularly throughout the data collection process to assess progress, discuss ambiguous data, and refine definitions as needed. More than 14 program-level data points were collected, including program overview, total credit hours required, and number of dissertation hours required. Available course data were also collected, including course number, name, type, level, requirement level, description, and credit hours. While 50 programs were initially identified, only 36 were included in the final analysis due to unavailable or incomplete data. After collecting detailed information for the 36 programs, course-level information was coded based on the variables of interest: course type, course level, and requirement level.

Results 

Prevalence of qualitative & mixed methods courses

The team analyzed data from 1,134 courses representing 36 programs, both in aggregate and within individual programs. Results show that only 14% (n=162) of the courses offered or required to graduate were identified as primarily qualitative and only 1% (n=17) of these courses were identified as mixed methods research (MMR). Further, only 6% (n=70) of these courses were identified as evaluation courses (Table 1). Out of 36 programs, three programs offered no qualitative courses. Qualitative courses made up somewhere between 1% and 20% of course offerings for 28 programs. Only five of the programs reviewed exceeded 20%. Only 12 programs offered any mixed methods courses and MMR courses made up less than 10% of the course offerings in each of those programs. 

Table 1. Aggregate Course Data by Type and Representation

Course Type             n (%)           Program Count
Quantitative Methods    409 (36%)       36 (100%)
Other                   317 (28%)       36 (100%)
Qualitative Methods     162 (14%)       33 (92%)
Research Methods        159 (14%)       36 (100%)
Program Evaluation      70 (6%)         36 (100%)
Mixed Methods           17 (1%)         12 (33%)
Total                   1,134 (100%)    –

Requirement level of qualitative and mixed methods courses

Out of 162 qualitative courses, 41% (n=66) were listed as required, 43% (n=69) were listed as elective, and 16% (n=26) were listed as optional. Out of 17 mixed methods research courses, 65% (n=11) were listed as required and 35% (n=6) were listed as elective.

Course level of qualitative and mixed methods courses

Out of 162 qualitative courses, 73% (n=118) were offered at an advanced level and 27% (n=44) were offered at an introductory level. Out of 17 mixed methods research courses, 71% (n=12) were offered at an advanced level and 29% (n=5) were offered at an introductory level.

Discussion 

Findings from the study provide valuable insight into the landscape of doctoral curricula in Educational Assessment, Evaluation, and Research programs. Both qualitative and mixed methods courses were underrepresented in the programs analyzed, though the majority of those that were offered were classified as advanced. Given that varied methodologies are needed to conduct rigorous evaluations, it is our hope that these findings will encourage doctoral training programs to include more courses on mixed and qualitative methods, and that they will encourage seasoned and novice evaluators alike to seek out training in these methodologies.

This study highlights opportunities for collaborative work in the ESM program and the ESM faculty’s commitment to fostering professional development. The project began in a research seminar, and ESM faculty mentored us through proposal development, data collection and analysis, and dissemination. They also encouraged us to share our findings at conferences and in journals and helped us through the process of drafting and submitting abstracts and manuscripts. Faculty worked closely with our team through every step of the process, serving as both expert consultants and supportive colleagues.

The study also highlights how messy data can get. Our team even affectionately nicknamed the project “messy MESA,” a nod to the common acronym for measurement, evaluation, statistics, and assessment (MESA) and to challenges that included changes in scope, missing data, and team turnover as students left and joined. While I hope the product of our study will contribute to the fields of evaluation, assessment, and applied research, the process has made me a better researcher.

References 

American Evaluation Association. (2018). AEA evaluator competencies. https://www.eval.org/About/Competencies-Standards/AEA-Evaluator-Competencies

Bamberger, M. (2015). Innovations in the use of mixed methods in real-world evaluation. Journal of Development Effectiveness, 7(3), 317–326. https://doi.org/10.1080/19439342.2015.1068832 

Capraro, R. M., & Thompson, B. (2008). The educational researcher defined: What will future researchers be trained to do? The Journal of Educational Research, 101, 247-253. doi:10.3200/JOER.101.4.247-253 

Dillman, L. (2013). Evaluator skill acquisition: Linking educational experiences to competencies. The American Journal of Evaluation, 34(2), 270–285. https://doi.org/10.1177/1098214012464512 

Engle, M., Altschuld, J. W., & Kim, Y. C. (2006). 2002 Survey of evaluation preparation programs in universities: An update of the 1992 American Evaluation Association–sponsored study. American Journal of Evaluation, 27(3), 353-359.  

LaVelle, J. M. (2020). Educating evaluators 1976–2017: An expanded analysis of university-based evaluation education programs. American Journal of Evaluation, 41(4), 494-509. 

LaVelle, J. M., & Donaldson, S. I. (2015). The state of preparing evaluators. In J. W. Altschuld & M. Engle (Eds.), Accreditation, certification, and credentialing: Relevant concerns for U.S. evaluators. New Directions for Evaluation, 145, 39–52.

Leech, N. L., & Goodwin, L. D. (2008). Building a methodological foundation: Doctoral-Level methods courses in colleges of education. Research in the Schools, 15(1). 

Leech, N. L., & Haug, C. A. (2015). Investigating graduate level research and statistics courses in schools of education. International Journal of Doctoral Studies, 10, 93-110. Retrieved from http://ijds.org/Volume10/IJDSv10p093-110Leech0658.pdf 

Levine, A. (2007). Educating researchers. Washington, DC: The Education Schools Project. 

Mathison, S. (2008). What is the difference between evaluation and research—and why do we care. Fundamental Issues in Evaluation, 183-196. 

McAdaragh, M. O., LaVelle, J. M., & Zhang, L. (2020). Evaluation and supporting inquiry courses in MSW programs. Research on Social Work Practice, 30(7), 750-759. doi:10.1177/1049731520921243

McEwan, H., & Slaughter, H. (2004). A brief history of the college of education’s doctoral degrees. Educational Perspectives, 2(37), 3-9. Retrieved from https://files.eric.ed.gov/fulltext/EJ877606.pdf

National Center for Education Statistics. (2020). The Classification of Instructional Programs [Data set]. https://nces.ed.gov/ipeds/cipcode/default.aspx?y=56.  

Page, R. N. (2001). Reshaping graduate preparation in educational research methods: One school’s experience. Educational Researcher, 30(5), 19-25. 

Patton, M. Q. (2014). Qualitative evaluation and research methods (4th ed.). Sage Publications.

Paul, C. A. (n.d.). Elementary and Secondary Education Act of 1965. Social Welfare History Project. Retrieved from https://socialwelfare.library.vcu.edu/programs/education/elementary-and-secondary-education-act-of-1965/

Seidling, M. B. (2015). Evaluator certification and credentialing revisited: A survey of American Evaluation Association members in the United States. In J. W. Altschuld & M. Engle (Eds.), Accreditation, certification, and credentialing: Relevant concerns for U.S. evaluators. New Directions for Evaluation, 145, 87–102.

Skolits, G. J., Morrow, J. A., & Burr, E. M. (2009). Reconceptualizing evaluator roles. American Journal of Evaluation, 30(3), 275-295. 

Standerfer, L. (2006). Before NCLB: The history of ESEA. Principal Leadership, 6(8), 26-27. 

Trevisan, M. S. (2004). Practical training in evaluation: A review of the literature. American Journal of Evaluation, 25(2), 255-272. 

Warner, L. H. (2020). Developing interpersonal skills of evaluators: A service-learning approach. American Journal of Evaluation, 41(3), 432-451.

Filed Under: Evaluation Methodology Blog, News

Educational Leadership and Policy Studies

325 Bailey Education Complex
Knoxville, Tennessee 37996

Phone: 865-974-2214
Fax: 865-974-6146
