Educational Leadership and Policy Studies


Archives for May 2024

Leadership Studies Program Holds 2024 Awards Ceremony Senior Toast

May 17, 2024 by Jonah Hall

The Leadership Studies program held its Senior Toast and Awards Ceremony last night, where we celebrated our forty-four 2023-24 graduates. Each year, our graduates lead a Capstone project as their culminating experience in the minor, with the most exceptional awarded a medal. We selected Tyler Johnson’s project “Addressing the Mental Health of IFC” and Amara Pappas’ “Musical Theatre Rehearsal Project and Major” as this cohort’s Self-Directed and Faculty-Initiated Capstones of the Year. Elle Caldow, Kyle Stork, Margaret Priest, Devon Thompson, and Jane Carson Wade earned Honorable Mentions for their exceptional Capstones. Erin McKee earned the Leadership Studies Engaged Community Scholar Medal, and Grace Woodside the Zanoni Award for contribution to the Leadership Studies Academic Community. We also recognized Dr. Sean Basso as our Faculty Member of the Year and ELPS’ own Diamond Leonard as our Staff Member of the Year.

The highlight of the evening was the induction of graduates, faculty, and staff into the Tri-Star Society. The 2024 class of the Tri-Star Society is: Brody Carmack, Mackenzie Galloway, Tyler Johnson, Erin McKee, Alay Mistry, McKaylee Mix, Amara Pappas, Devon Thompson, Mikele Vickers, Kendall William, and Grace Woodside. These leaders truly distinguished themselves as Leaders of Leaders with exceptional potential for continued leadership within our state, as demonstrated by their time at the University and in our community. Undergraduate Leadership Studies celebrates each of our graduates, all they have accomplished and will accomplish, and those in the UT community who contributed to their success.

Filed Under: News

Supporting Early Childhood Teacher Growth and Development with Actionable Context-Based Feedback

May 17, 2024 by Jonah Hall

By Dr. Mary Lynne Derrington & Dr. Alyson Lavigne

Please Note: This is the final part of a four-part series on actionable feedback, exploring how Leadership Content Knowledge (LCK) informs teacher feedback in the areas of STEM, Literacy, and Early Childhood Education.

Missed the beginning of the series? Click here to read Part 1 on making teacher feedback count!

According to the Center on the Developing Child at Harvard University, in the first few years of life more than 1 million new neural connections form every second. With children experiencing rapid brain development from birth to age 8, early childhood education and the experiences in those settings are critical for building a foundation of lifelong learning and success. Thus, supporting the educators who teach these early learners is perhaps one of the best educational investments that any school leader can make. 

One way leaders support teachers is to observe classrooms and provide feedback. Maria Boeke Mongillo and Kristine Reed Woleck argue that those who observe and provide feedback to early childhood educators can leverage leadership content knowledge—knowledge about the principles of early childhood education—and apply that knowledge to the observation cycle and the context in which early childhood educators work.

To apply leadership content knowledge, school leaders should first be familiar with the National Association for the Education of Young Children (NAEYC) and its principles of child development and learning, most recently published in 2020. The list of nine principles includes play as an essential element for promoting joyful learning and fostering self-regulation, language, cognitive and social competencies, and content knowledge across disciplines.

Once a school leader is familiar with NAEYC’s nine principles, they may consider applying early childhood-informed “look-fors” during an observation with related questions that can be used during a pre- or post-observation conference. Examples by area of teaching are provided below.

Learning Environment

Look-Fors

  • Flexible seating and work spaces allow for collaboration, social skill development, and language development
  • Physical space and furniture allow for movement and motor breaks

Pre- or Post-Observation Questions

  • How do your classroom environment and materials support your student learning outcomes?

Instructional Practices

Look-Fors

  • Opportunities for learning are embedded in play and collaborative experiences

Pre- or Post-Observation Questions

  • How might your students consolidate and extend their learning in this lesson through play opportunities?

Assessment

Look-Fors

  • Use of observations and interviews to assess student learning.

Pre- or Post-Observation Questions

  • What methods are you using to collect information about student learning during this lesson?

These systematic structures can be applied to ensure that observation and, importantly, feedback to early childhood educators are meaningful and relevant.

This entry concludes our four-part series on actionable feedback.

If this blog has sparked your interest and you want to learn more, check out our book, Actionable Feedback to PK-12 Teachers. And for other suggestions on supervising teachers in early childhood, see Chapter 10 by Maria Boeke Mongillo and Kristine Reed Woleck.

Missed the beginning of the series? Click here to read the first, second, and third blog posts.

Filed Under: News

Reflecting on a Decade After ESM: My Continuing Journey as an Evaluation Practitioner and Scholar

May 15, 2024 by Jonah Hall

By Tiffany Tovey, Ph.D.

Greetings, fellow explorers of evaluation! I’m Tiffany Tovey, a fellow nerd, UTK alum, and practitioner on a constantly evolving professional and personal journey, navigating the waters with a compass called reflective practice. Today, I’m thrilled to reflect together with you on the twists and turns of my journey as an evaluation practitioner and scholar in the decade since I defended my dissertation and offer some insights for you to consider in your own work.

My Journey in Evaluation 

Beginning the unlearning process. The seeds of my journey into social science research were sown during my undergraduate years as a first-generation college student at UTK, where I pursued both philosophy and psychology for my bachelor’s degree. While learning about the great philosophical debates and thinkers, I was traditionally trained in experimental and social psychology under the mentorship of Dr. Michael Olson. This rigorous grounding in knowledge and inquiry gave me a foundational perspective on what was to come. I learned the importance of asking questions, embracing fallibilism, and appreciating the depth of what I now call reflective practice. Little did I know, this foundation set the stage for my immersion into the world of evaluation, starting with the Evaluation, Statistics, and Methodology (ESM) program at UTK.

Upon entering ESM for my Ph.D., I found myself in the messy, complex, and dynamic realm of applying theory to practice. Here, my classical training in positivist, certainty-oriented assumptions was immediately challenged (in ways I am still unlearning to this day), and my interests in human behavior and reflective inquiry found a new, more nuanced, context-oriented environment to thrive. Let me tell you about lessons I learned from three key people along the way: 

  • Communicating Data/Information: Statistics are tools for effectively communicating about and reflecting on what we know about what is and (sometimes) why it is the way it is. Dr. Jennifer Ann Morrow played a pivotal role in shaping my understanding of statistics and its application in evaluation. Her emphasis on making complex statistical information accessible and meaningful to students, clients, and other audiences has stuck with me.
     
    As important as statistics are, so too are words—people’s lived experiences, which is why qualitative research is SO important in our work, something that all my instructors helped to instill in me in ESM. I can’t help it; I’m a word nerd. Whether qualitative or quantitative, demystifying concepts, constructs, and contexts, outsmarting software and data analysis programs, and digesting and interpreting information in a way that our busy listeners can understand and make use of is a fundamental part of our jobs.
  • Considering Politics and Evaluation Use: Under the mentorship of Dr. Gary Skolits, a retired ESM faculty member and current adjunct faculty member at UTK, I began to understand the intricate dances evaluators navigate in the realms of politics and the use of evaluation findings. His real-talk and guidance helped prepare me for the complexities of reflective practice in evaluation, which became the focus of my dissertation. Upon reflection, I see my dissertation work as a continuation of the reflective journey I began in my undergraduate studies, and my work with Gary as a fine-tuning and clarification of the critical role of self-awareness, collaboration, facilitation, and tact in the evaluation process.
  • The Key Ingredient – Collaborative Reflective Practice: My journey was deepened by my engagement with Dr. John Peters, another now-retired faculty member from UTK’s College of Education, Health, and Human Sciences, who introduced me to the value of collaborative reflective practice through dialogue and systematic reflective processes. His teachings seeded my belief that evaluators should facilitate reflective experiences for clients and collaborators, fostering deeper understandings, cocreated learning, and more meaningful outcomes (see the quote by John himself below… and think about the ongoing role of the evaluator during the lifecycle of a project). He illuminated the critical importance of connecting theory to practice through reflective practice—a transformative activity that occupies the liminal space between past actions and future possibilities. This approach encourages us to critically examine the complexities of practice, thereby directly challenging the uncritical acceptance of the status quo.

My post-PhD journey. I currently serve as the director of the Office of Assessment, Evaluation, and Research Services and teach program evaluation, qualitative methods, reflective practice, interpersonal skills, and just-in-time applied research skills to graduate and undergraduate students at UNC Greensboro. Here, I apply my theoretical knowledge to real-world evaluation projects, managing graduate students and leading them on their professional evaluation learning journey. Each project and collaboration has been an opportunity to apply and refine my understanding of reflective practice, effective communication, and the transformative power of evaluation.  

My role at UNCG has been a continued testament to the importance of reflective practice. The need for intentional reflective experiences runs throughout my work as director of OAERS, as lead evaluator and researcher on sponsored projects, in mentorship and scaffolding with students, and as a teacher. Building in structured time to think, unpack questions and decisions together, and learn how to go on more wisely is a ubiquitous need. Making space for reflective practice means leveraging the ongoing learning and unlearning process that defines the contours of (1) evaluation practice, (2) evaluation scholarship, and (3) let’s be honest… life itself!

Engaging with Others: The Heart of Evaluation Practice 

As evaluators, our work is inherently collaborative and human centered. We engage with diverse collaborators and audiences, each bringing their unique perspectives and experiences to the table. In this complex interplay of voices, it’s essential that we—evaluators—foster authentic encounters that lead to meaningful insights and outcomes.

In the spirit of Martin Buber’s philosophy, I try to approach my interactions with an open heart and mind, seeking to establish a genuine connection with those I work with. Buber reminds us that “in genuine dialogue, each of the participants really has in mind the other or others in their present and particular being and turns to them with the intention of establishing a living mutual relation between himself and them” (Buber, 1965, p. 22). This perspective is foundational to my practice, as it emphasizes the importance of mutual respect and understanding in creating a space for collaborative inquiry and growth. 

Furthermore, embracing a commitment to social justice is integral to my work as an evaluator. Paulo Freire’s insights resonate deeply with me:  

Dialogue cannot exist, however, in the absence of a profound love for the world and for people. The naming of the world, which is an act of creation and re-creation, is not possible if it is not infused with love. Love is at the same time the foundation of dialogue and dialogue itself. (Freire, 2000, p. 90) 

This principle guides me in approaching each evaluation project with a sense of empathy and a dedication to promoting equity and empowerment through my work. 

Advice for Emerging Evaluators 

  • Dive in and embrace the learning opportunities that come your way.  
  • Reflect on your experiences and be honest with yourself.  
  • Remember, evaluation is about people and contexts, not just techniques and tools.  
  • Leverage your unique personality and lived experience in your work. 
  • Never underestimate the power of effective, authentic communication… and networking. 
  • Most importantly, listen to and attend to others—we are a human-serving profession geared towards social betterment. Be in dialogue with your surroundings and those you are in collaboration with. View evaluation as a reflective practice, and your role as a facilitator of that process. Consider how you can leverage the perspectives of Buber and Freire in your own practice to foster authentic encounters and center social justice in your work. 

Conclusion and Invitation 

My journey as an evaluation scholar is a journey of continuous learning, reflection, and growth. As I look to the future, I see evaluation as a critical tool for navigating the complex challenges of our world, grounded in reflective practice and a commitment to the public good. To my fellow evaluators, both seasoned and emerging, let’s embrace the challenges and opportunities ahead with open minds and reflective hearts. And to the ESM family at UTK, know that I am just an email away (tlsmi32@uncg.edu), always eager to connect, share insights, and reflect further with you.

Filed Under: Evaluation Methodology Blog

How Do I Critically Consume Quantitative Research?

May 1, 2024 by Jonah Hall

By Austin Boyd 

Every measurement, evaluation, statistics, and assessment (MESA) professional, whether an established educator and practitioner or an aspiring student, engages with academic literature in some capacity. Sometimes for work, other times for pleasure, but always in the pursuit of new knowledge. But how do we as consumers of research determine whether the quantitative research we engage with is high quality?

My name is Austin Boyd, and I am a researcher, instructor, and ESM alumnus. I have read my fair share of articles over the past decade and have been fortunate enough to publish a few of my own. I have read articles in the natural, formal, applied, and social sciences, and while they all shared the title of peer-reviewed publication, there was definite variability in the quality of quantitative research from one article to the next. Initially, it was difficult for me to even consider the idea that a peer-reviewed publication could be anything less than perfect. However, as I have grown as a critical consumer of research, I have devised six questions to keep in mind when reading articles with quantitative analyses that allow me to remain objective in the face of exciting results.

  1. What is the purpose of the article?

The first question to keep in mind when reading an article is, “What is its purpose?” Articles may state their purpose in the form of research questions or even in the title, using words such as “empirical”, “validation”, and “meta-analysis”. While the purpose of an article has no bearing on its quality, it does impact the type of information a reader should expect to obtain from it. Do the research questions indicate that the article will present exploratory research on a new phenomenon or attempt to validate previous research findings? Remaining aware of the article’s purpose allows you to determine whether the information is relevant and within the scope of what the article should provide.

  2. What information is provided about obtaining participants and about the participants themselves?

The backbone of quantitative research is data. In order to have any data, participants or cases must be found and measured for the phenomena of interest. These participants are all unique, and it is this uniqueness that needs to be disclosed to the reader. Information on the population of interest, how the selected participants were recruited, who they are, and why their results were or were not included in the analysis is essential for understanding the context of the research. Beyond the article itself, the demographics of the participants are also important for planning future research. While research participants are drawn largely from Western, educated, industrialized, rich, and democratic (WEIRD; Henrich et al., 2010) societies, it should not be assumed that this is the case for all research. The author(s) of an article should disclose demographic information about the participants so that readers understand the context of the data and the generalizability of the results, and so that researchers can accurately replicate or extend the research in new contexts.

  3. Do the analyses used make sense for the data and proposed research question(s)?

In order to obtain results from the quantitative data collected, some form of analysis must be conducted. The most basic methods of exploring quantitative data are called statistics (Sheard, 2018). The selected statistical analysis should align with the variables presented in the article and answer the research question(s) guiding the project. Variables measured on a nominal scale should not be used as the outcome variable in analyses that compare group means, such as t-tests and ANOVAs, while ratio-scale variables should not be used in analyses dealing with frequency distributions, such as chi-square tests. However, some analyses require the same variable types, making them seemingly interchangeable. For example, t-tests, logistic regressions, and point-biserial analyses all use two variables, one continuous and one binary, yet each addresses a different research question: “Is there a difference between groups?”, “Can we predict an outcome?”, and “Is there a relationship between variables?”. So while there is a level of subjectivity as to which statistical analysis can be used to analyze data, there are objectively incorrect analyses given both the overarching research questions and the scale of measurement of the available variables in the data.
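To make the contrast concrete, here is a minimal sketch in Python (not from the article: the data are simulated, and the SciPy and statsmodels libraries are assumed to be available) showing how the same binary-plus-continuous variable pair feeds three different analyses, each answering a different research question.

```python
# A minimal illustrative sketch, not from the article: one binary variable and
# one continuous variable analyzed three ways, each answering a different question.
# Data are simulated; scipy and statsmodels are assumed to be installed.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(42)
group = rng.integers(0, 2, size=200)                  # binary variable (e.g., two conditions)
score = 50 + 5 * group + rng.normal(0, 10, size=200)  # continuous variable

# "Is there a difference between groups?" -> independent-samples t-test
t_res = stats.ttest_ind(score[group == 0], score[group == 1])

# "Can we predict an outcome?" -> logistic regression predicting group from score
logit = sm.Logit(group, sm.add_constant(score)).fit(disp=0)

# "Is there a relationship between variables?" -> point-biserial correlation
r_pb, p_pb = stats.pointbiserialr(group, score)

print(f"t-test: t = {t_res.statistic:.2f}, p = {t_res.pvalue:.4f}")
print(f"logistic regression: slope p = {logit.pvalues[1]:.4f}")
print(f"point-biserial: r = {r_pb:.2f}, p = {p_pb:.4f}")
```

All three calls run on identical variables, which is exactly why the research question, not the data alone, must drive the choice of analysis.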

  4. What results are provided?

While this is a seemingly straightforward question, there is a lot of information that can be provided about a given analysis. The most basic, and least informative, is a blanket statement about statistical significance. Even when there is no statistically significant result to report, a blanket statement does not convey sufficient information, given all the different values that can be reported for each analysis. For example, a t-test has a t value, degrees of freedom, p value, confidence interval, power level, and effect size, all of which provide valuable information about the results. While having some of these values does allow the reader to calculate the missing ones, the onus should not be put on the reader to do so (Cohen, 1990). Additionally, depending on the type of statistical analysis chosen, additional tests must be conducted to determine whether the data meet the assumptions necessary for the analysis. The results of these tests of assumptions, and the decisions made based on them, should be reported and supported by the existing literature.
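As a worked illustration, here is a minimal Python sketch (simulated data; a recent version of SciPy is assumed) of reporting a t-test fully rather than with a blanket significance statement:

```python
# A minimal illustrative sketch (simulated data; recent SciPy assumed) of
# reporting a t-test with its statistic, degrees of freedom, p value,
# confidence interval, and effect size rather than just "p < .05".
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.normal(52, 10, size=100)  # group A scores (simulated)
b = rng.normal(48, 10, size=100)  # group B scores (simulated)

res = stats.ttest_ind(a, b)
ci = res.confidence_interval(confidence_level=0.95)  # CI for the mean difference

# Cohen's d from the pooled standard deviation
pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                    / (len(a) + len(b) - 2))
d = (a.mean() - b.mean()) / pooled_sd

print(f"t({res.df:.0f}) = {res.statistic:.2f}, p = {res.pvalue:.4f}, "
      f"95% CI [{ci.low:.2f}, {ci.high:.2f}], d = {d:.2f}")
```

Reporting all of these values spares readers from having to reconstruct them, in the spirit of Cohen (1990).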

  5. Is there any discussion of limitations?

Almost every article has limitations in some form or another, and these should be made known to the reader. If an article truly had no limitations, the author would make a point of stating as much. Limitations include limits to the generalizability of the findings, confounding variables, or simply time constraints. While these might seem negative, they are not immediate reasons to discredit an article entirely. As with the demographics, the limitations provide further context about the research. They can even provide useful direction for follow-up studies, in the same way a future research section would.

  6. Do you find yourself still having questions after finishing the article?

The final question to keep in mind once you have finished reading an article is “Do you still have questions?” At the end of an article, you shouldn’t find yourself needing more information about the study. You might want to know more about the topic or similar research, but you shouldn’t be left wondering about pieces of the research design or other methodological aspects of the study. High-quality research deserves an equally high-quality article, which includes ample information about every aspect of the study. 

While not an exhaustive list, these six questions provide a starting point for determining whether research with quantitative data is of high quality. Not all research is peer-reviewed (conference presentations, blog posts, and white papers often are not), and simply being peer-reviewed does not make a publication infallible. It is important to understand how to critically consume research in order to successfully navigate the ever-expanding body of scientific research.

Additional Resources:

https://blogs.lse.ac.uk/impactofsocialsciences/2016/05/09/how-to-read-and-understand-a-scientific-paper-a-guide-for-non-scientists/  

https://statmodeling.stat.columbia.edu/2021/06/16/wow-just-wow-if-you-think-psychological-science-as-bad-in-the-2010-2015-era-you-cant-imagine-how-bad-it-was-back-in-1999/ 

https://totalinternalreflectionblog.com/2018/05/21/check-the-technique-a-short-guide-to-critical-reading-of-scientific-papers/ 

https://undsci.berkeley.edu/understanding-science-101/how-science-works/scrutinizing-science-peer-review/ 

https://www.linkedin.com/pulse/critical-consumers-scientific-literature-researchers-patients-savitz/ 

References:

Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45(12), 1304–1312. DOI: 10.1037/0003-066X.45.12.1304

Henrich, J., Heine, S., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61–83. DOI: 10.1017/S0140525X0999152X

Sheard, J. (2018). Chapter 18 – Quantitative data analysis. In K. Williamson & G. Johanson (Eds.), Research Methods (2nd ed., pp. 429–452). Chandos Publishing. DOI: 10.1016/B978-0-08-102220-7.00018-2

Filed Under: Evaluation Methodology Blog
