Educational Leadership and Policy Studies


Supporting Early Childhood Teacher Growth and Development with Actionable Context-Based Feedback


May 17, 2024 by Jonah Hall

By Dr. Mary Lynne Derrington & Dr. Alyson Lavigne

Please Note: This is the final part of a four-part series on actionable feedback, focusing on Leadership Content Knowledge (LCK) and teacher feedback in Early Childhood Education.

Missed the beginning of the series? Click here to read Part 1 on making teacher feedback count!

According to the Center on the Developing Child at Harvard University, in the first few years of life more than 1 million new neural connections form every second. With children experiencing rapid brain development from birth to age 8, early childhood education and the experiences in those settings are critical for building a foundation of lifelong learning and success. Thus, supporting the educators who teach these early learners is perhaps one of the best educational investments that any school leader can make. 

One way leaders support teachers is to observe classrooms and provide feedback. Maria Boeke Mongillo and Kristine Reed Woleck argue that those who observe and provide feedback to early childhood educators can leverage leadership content knowledge—knowledge about the principles of early childhood education—and apply that knowledge to the observation cycle and the context in which early childhood educators work.

To apply leadership content knowledge, school leaders should first be familiar with the National Association for the Education of Young Children (NAEYC) and their principles of child development and learning, most recently published in 2020. The list of nine principles includes the essential element of play for promoting joyful learning to foster self-regulation, language, cognitive and social competencies, and content knowledge across disciplines.

Once a school leader is familiar with NAEYC’s nine principles, they may consider applying early childhood-informed “look-fors” during an observation with related questions that can be used during a pre- or post-observation conference. Examples by area of teaching are provided below.

Learning Environment

Look-Fors

  • Flexible seating and work spaces allow for collaboration and support social skill and language development
  • Physical space and furniture allow for movement and motor breaks

Pre- or Post-Observation Questions

  • How do your classroom environment and materials support your student learning outcomes?
Instructional Practices

Look-Fors

  • Opportunities for learning are embedded in play and collaborative experiences

Pre- or Post-Observation Questions

  • How might your students consolidate and extend their learning in this lesson through play opportunities?
Assessment

Look-Fors

  • Use of observations and interviews to assess student learning.

Pre- or Post-Observation Questions

  • What methods are you using to collect information about student learning during this lesson?

These systematic structures can be applied to ensure that observation and, importantly, feedback to early childhood educators are meaningful and relevant.

This blog entry is the last entry as part of a four-part series on actionable feedback. 

If this blog has sparked your interest and you want to learn more, check out our book, Actionable Feedback to PK-12 Teachers. And for other suggestions on supervising teachers in early childhood, see Chapter 10 by Maria Boeke Mongillo and Kristine Reed Woleck.

Missed the beginning of the series? Click here to read the first, second and third blog posts. 

Filed Under: News

Reflecting on a Decade After ESM: My Continuing Journey as an Evaluation Practitioner and Scholar


May 15, 2024 by Jonah Hall

By Tiffany Tovey, Ph.D.

Greetings, fellow explorers of evaluation! I’m Tiffany Tovey, a fellow nerd, UTK alum, and practitioner on a constantly evolving professional and personal journey, navigating the waters with a compass called reflective practice. Today, I’m thrilled to reflect together with you on the twists and turns of my journey as an evaluation practitioner and scholar in the decade since I defended my dissertation and offer some insights for you to consider in your own work.

My Journey in Evaluation 

Beginning the unlearning process. The seeds of my journey into social science research were sown during my undergraduate years as a first-generation college student at UTK, where I pursued both philosophy and psychology for my bachelor’s degree. While learning about the great philosophical debates and thinkers, I was traditionally trained in experimental and social psychology under the mentorship of Dr. Michael Olson. This rigorous exploration of knowledge and inquiry provided me with a foundational perspective on what was to come. I learned the importance of asking questions, embracing fallibilism, and appreciating the depth of what I now call reflective practice. Little did I know, this foundation really set the stage for my immersion into the world of evaluation, starting with the Evaluation, Statistics, and Methodology (ESM) program at UTK.

Upon entering ESM for my Ph.D., I found myself in the messy, complex, and dynamic realm of applying theory to practice. Here, my classical training in positivist, certainty-oriented assumptions was immediately challenged (in ways I am still unlearning to this day), and my interests in human behavior and reflective inquiry found a new, more nuanced, context-oriented environment to thrive. Let me tell you about lessons I learned from three key people along the way: 

  • Communicating Data/Information: Statistics are tools for effectively communicating about and reflecting on what we know about what is and (sometimes) why it is the way it is. Dr. Jennifer Ann Morrow played a pivotal role in shaping my understanding of statistics and its application in evaluation. Her emphasis on making complex statistical information accessible and meaningful to students, clients, and other audiences has stuck with me.

    As important as statistics are, so too are words—people’s lived experiences, which is why qualitative research is SO important in our work, something that all my instructors helped to instill in me in ESM. I can’t help it; I’m a word nerd. Whether qualitative or quantitative, demystifying concepts, constructs, and contexts, outsmarting software and data analysis programs, and digesting and interpreting information in a way that our busy listeners can understand and make use of is a fundamental part of our jobs.
  • Considering Politics and Evaluation Use: Under the mentorship of Dr. Gary Skolits, a retired ESM faculty member and current adjunct faculty member at UTK, I began to understand the intricate dances evaluators navigate in the realms of politics and the use of evaluation findings. His real-talk and guidance helped prepare me for the complexities of reflective practice in evaluation, which became the focus of my dissertation. Upon reflection, I see my dissertation work as a continuation of the reflective journey I began in my undergraduate studies, and my work with Gary as a fine-tuning and clarification of the critical role of self-awareness, collaboration, facilitation, and tact in the evaluation process.
  • The Key Ingredient – Collaborative Reflective Practice: My journey was deepened by my engagement with Dr. John Peters, another now-retired faculty member from UTK’s College of Education Health and Human Sciences, who introduced me to the value of collaborative reflective practice through dialogue and systematic reflective processes. His teachings seeded my belief that evaluators should facilitate reflective experiences for clients and collaborators, fostering deeper understandings, cocreated learning, and more meaningful outcomes (see the quote by John himself below… and think about the ongoing role of the evaluator during the lifecycle of a project). He illuminated the critical importance of connecting theory to practice through reflective practice—a transformative activity that occupies the liminal space between past actions and future possibilities. This approach encourages us to critically examine the complexities of practice, thereby directly challenging the uncritical acceptance of the status quo.

My post-PhD journey. I currently serve as the director of the Office of Assessment, Evaluation, and Research Services and teach program evaluation, qualitative methods, reflective practice, interpersonal skills, and just-in-time applied research skills to graduate and undergraduate students at UNC Greensboro. Here, I apply my theoretical knowledge to real-world evaluation projects, managing graduate students and leading them on their professional evaluation learning journey. Each project and collaboration has been an opportunity to apply and refine my understanding of reflective practice, effective communication, and the transformative power of evaluation.

My role at UNCG has been a continued testament to the importance of reflective practice. The need for intentional reflective experiences runs throughout my roles as director of OAERS, lead evaluator and researcher on sponsored projects, mentor to students, and teacher. Building in structured time to think, unpack questions and decisions together, and learn how to go on more wisely is a ubiquitous need. Making space for reflective practice means leveraging the ongoing learning and unlearning process that defines the contours of (1) evaluation practice, (2) evaluation scholarship, and (3) let’s be honest… life itself!

Engaging with Others: The Heart of Evaluation Practice 

As evaluators, our work is inherently collaborative and human centered. We engage with diverse collaborators and audiences, each bringing their unique perspectives and experiences to the table. In this complex interplay of voices, it’s essential that we—evaluators—foster authentic encounters that lead to meaningful insights and outcomes.

In the spirit of Martin Buber’s philosophy, I try to approach my interactions with an open heart and mind, seeking to establish a genuine connection with those I work with. Buber reminds us that “in genuine dialogue, each of the participants really has in mind the other or others in their present and particular being and turns to them with the intention of establishing a living mutual relation between himself and them” (Buber, 1965, p. 22). This perspective is foundational to my practice, as it emphasizes the importance of mutual respect and understanding in creating a space for collaborative inquiry and growth. 

Furthermore, embracing a commitment to social justice is integral to my work as an evaluator. Paulo Freire’s insights resonate deeply with me:  

Dialogue cannot exist, however, in the absence of a profound love for the world and for people. The naming of the world, which is an act of creation and re-creation, is not possible if it is not infused with love. Love is at the same time the foundation of dialogue and dialogue itself. (Freire, 2000, p. 90) 

This principle guides me in approaching each evaluation project with a sense of empathy and a dedication to promoting equity and empowerment through my work. 

Advice for Emerging Evaluators 

  • Dive in and embrace the learning opportunities that come your way.  
  • Reflect on your experiences and be honest with yourself.  
  • Remember, evaluation is about people and contexts, not just techniques and tools.  
  • Leverage your unique personality and lived experience in your work. 
  • Never underestimate the power of effective, authentic communication… and networking. 
  • Most importantly, listen to and attend to others—we are a human-serving profession geared towards social betterment. Be in dialogue with your surroundings and those you are in collaboration with. View evaluation as a reflective practice, and your role as a facilitator of that process. Consider how you can leverage the perspectives of Buber and Freire in your own practice to foster authentic encounters and center social justice in your work. 

Conclusion and Invitation 

My journey as an evaluation scholar is a journey of continuous learning, reflection, and growth. As I look to the future, I see evaluation as a critical tool for navigating the complex challenges of our world, grounded in reflective practice and a commitment to the public good. To my fellow evaluators, both seasoned and emerging, let’s embrace the challenges and opportunities ahead with open minds and reflective hearts. And to the ESM family at UTK, know that I am just an email away (tlsmi32@uncg.edu), always eager to connect, share insights, and reflect further with you.

Filed Under: Evaluation Methodology Blog

How Do I Critically Consume Quantitative Research?


May 1, 2024 by Jonah Hall

By Austin Boyd 

Every measurement, evaluation, statistics, and assessment (MESA) professional, whether an established educator and practitioner or an aspiring student, engages with academic literature in some capacity. Sometimes for work, other times for pleasure, but always in the pursuit of new knowledge. But how do we as consumers of research determine whether the quantitative research we engage with is high quality?

My name is Austin Boyd, and I am a researcher, instructor, and ESM alum. I have read my fair share of articles over the past decade and was fortunate enough to publish a few of my own. I have read articles in the natural, formal, applied, and social sciences, and while they all shared the title of peer-reviewed publication, there was definite variability in the quality of quantitative research from one article to the next. Initially, it was difficult for me to even consider the idea that a peer-reviewed publication would be anything less than perfect. However, as I have grown as a critical consumer of research, I have devised six questions to keep in mind when reading articles with quantitative analyses that allow me to remain objective in the face of exciting results.

  1. What is the purpose of the article?

The first question to keep in mind when reading an article is, “What is its purpose?” Articles may state their purpose in the form of research questions or even in the title, using words such as “empirical”, “validation”, and “meta-analysis”. While the purpose of an article has no bearing on its quality, it does impact the type of information a reader should expect to obtain from it. Do the research questions indicate that the article will present new exploratory research on a new phenomenon or attempt to validate previous research findings? Remaining aware of the article’s purpose allows you to determine whether the information is relevant and within the scope of what it should provide.

  2. What information is provided about obtaining participants and about the participants themselves?

The backbone of quantitative research is data. In order to have any data, participants or cases must be found and measured for the phenomena of interest. These participants are all unique, and it is this uniqueness that needs to be disclosed to the reader. Information on the population of interest, how the selected participants were recruited, who they are, and why their results were or were not included in the analysis is essential for understanding the context of the research. Beyond the article itself, the demographics of the participants are also important for planning future research. While research participants are drawn largely from Western, educated, industrialized, rich, and democratic (WEIRD; Henrich et al., 2010) societies, it should not be assumed that this is the case for all research. The author(s) of an article should disclose demographic information about the participants so that readers understand the context of the data and the generalizability of the results, and so that researchers can accurately replicate or expand the research to new contexts.
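Checking this kind of reporting is mostly a matter of tabulating the sample. A minimal Python sketch using hypothetical participant records (all names and values here are invented for illustration, not drawn from any study):

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical participant records; a real study would load its own data.
participants = [
    {"age": 21, "gender": "F", "country": "US"},
    {"age": 34, "gender": "M", "country": "US"},
    {"age": 28, "gender": "F", "country": "CA"},
    {"age": 45, "gender": "F", "country": "US"},
    {"age": 19, "gender": "M", "country": "UK"},
    {"age": 52, "gender": "F", "country": "US"},
]

# Category counts show who the sample actually represents.
print(Counter(p["gender"] for p in participants))
print(Counter(p["country"] for p in participants))   # heavily US: limited generalizability

# Age summary for the participants section of a write-up.
ages = [p["age"] for p in participants]
print(f"M = {mean(ages):.1f}, SD = {stdev(ages):.1f}, range = {min(ages)}-{max(ages)}")
```

A summary like this makes it immediately visible, to author and reader alike, which populations the results can plausibly generalize to.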

  3. Do the analyses used make sense for the data and proposed research question(s)?

In order to obtain results from the quantitative data collected, some form of analysis must be conducted. The most basic methods of exploring quantitative data are called statistics (Sheard, 2018). The selected statistical analysis should align with the variables presented in the article and answer the research question(s) guiding the project. Variables measured on a nominal scale should not be used as the outcome variable in analyses that compare group means, such as t-tests and ANOVAs, while ratio scale variables should not be used in analyses dealing with frequency distributions, such as chi-square tests. However, there are analyses which require the same variable types, making them seemingly interchangeable. For example, t-tests, logistic regressions, and point biserial analyses all use two variables, one continuous and one binary. Yet each of these analyses addresses a different research question: “Is there a difference between groups?”, “Can we predict an outcome?”, and “Is there a relationship between variables?”. While there is a level of subjectivity as to which statistical analysis can be used to analyze data, there are objectively incorrect analyses given both the overarching research questions and the scale of measurement of the available variables in the data.

  4. What results are provided?

While a seemingly straightforward question, there is a lot of information that can be provided about a given analysis. The most basic, and least informative, is a blanket statement about statistical significance. Whether or not a result is statistically significant, such a statement omits the many values that can be reported for each analysis. For example, a t-test has a t value, degrees of freedom, p value, confidence interval, power level, and effect size, all of which provide valuable information about the results. While having some of these values does allow the reader to calculate the missing ones, the onus should not be put on the reader to do so (Cohen, 1990). Additionally, depending on the type of statistical analysis chosen, additional tests must be conducted to determine whether the data meet the assumptions necessary for the analysis. The results of these tests of assumptions, and the decisions made based on them, should be reported and supported by the existing literature.
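As a sketch of what fuller reporting looks like, most of these values for a t-test can be assembled in a few lines (Python with SciPy; the scores below are invented for illustration, the 95% CI on the mean difference is computed by hand from the pooled standard error, and power analysis is omitted):

```python
import numpy as np
from scipy import stats

# Invented scores for two groups of eight.
a = np.array([52.1, 48.3, 55.0, 60.2, 47.8, 53.5, 58.1, 50.9])
b = np.array([58.4, 61.2, 55.7, 63.0, 59.9, 57.3, 64.1, 60.5])

t, p = stats.ttest_ind(a, b)          # equal-variance independent-samples t-test
df = len(a) + len(b) - 2              # degrees of freedom

# Cohen's d from the pooled standard deviation (the effect size).
pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                     (len(b) - 1) * b.var(ddof=1)) / df)
d = (a.mean() - b.mean()) / pooled_sd

# 95% confidence interval on the mean difference.
se = pooled_sd * np.sqrt(1 / len(a) + 1 / len(b))
crit = stats.t.ppf(0.975, df)
diff = a.mean() - b.mean()
lo, hi = diff - crit * se, diff + crit * se

print(f"t({df}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A sentence built from that one print line already tells the reader far more than “the difference was significant.”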

  5. Is there any discussion of limitations?

Almost every article has limitations in some form or another, and these should be made known to the reader; if an article truly has none, the author should make a point to state as much. Limitations include limits to the generalizability of the findings, confounding variables, or simply time constraints. While these might seem negative, they are not immediate reasons to discredit an article entirely. As with the demographics, the limitations provide further context about the research. They can even be useful in providing direction for follow-up studies, in the same way a future research section would.

  6. Do you find yourself still having questions after finishing the article?

The final question to keep in mind once you have finished reading an article is “Do you still have questions?” At the end of an article, you shouldn’t find yourself needing more information about the study. You might want to know more about the topic or similar research, but you shouldn’t be left wondering about pieces of the research design or other methodological aspects of the study. High-quality research deserves an equally high-quality article, which includes ample information about every aspect of the study. 

While not an exhaustive list, these six questions are designed to provide a starting point for determining if research with quantitative data is of high quality. Not all research is peer-reviewed, including conference presentations, blog posts, and white papers, and simply being peer-reviewed does not make a publication infallible. It is important to understand how to critically consume research in order to successfully navigate the ever-expanding body of scientific research.

Additional Resources:

https://blogs.lse.ac.uk/impactofsocialsciences/2016/05/09/how-to-read-and-understand-a-scientific-paper-a-guide-for-non-scientists/  

https://statmodeling.stat.columbia.edu/2021/06/16/wow-just-wow-if-you-think-psychological-science-as-bad-in-the-2010-2015-era-you-cant-imagine-how-bad-it-was-back-in-1999/ 

https://totalinternalreflectionblog.com/2018/05/21/check-the-technique-a-short-guide-to-critical-reading-of-scientific-papers/ 

https://undsci.berkeley.edu/understanding-science-101/how-science-works/scrutinizing-science-peer-review/ 

https://www.linkedin.com/pulse/critical-consumers-scientific-literature-researchers-patients-savitz/

References:

Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45(12), 1304–1312. https://doi.org/10.1037/0003-066X.45.12.1304

Henrich, J., Heine, S., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83. https://doi.org/10.1017/S0140525X0999152X

Sheard, J. (2018). Quantitative data analysis. In K. Williamson & G. Johanson (Eds.), Research methods (2nd ed., pp. 429–452). Chandos Publishing. https://doi.org/10.1016/B978-0-08-102220-7.00018-2

Filed Under: Evaluation Methodology Blog

Engaging Students in Online, Asynchronous Courses: Strategies for Success


April 15, 2024 by Jonah Hall

By S. Nicole Jones, Ph.D. 

Hello! My name is Nicole Jones, and I am a 2022 graduate of the Evaluation, Statistics, and Methodology (ESM) PhD program at the University of Tennessee, Knoxville (UTK). I currently work as the Assessment & Accreditation Coordinator in the College of Veterinary Medicine (CVM) at the University of Georgia (UGA). I also teach online, asynchronous program evaluation classes for UTK’s Evaluation, Statistics, & Methodology PhD and Evaluation Methodology MS programs. My research interests include the use of artificial intelligence in evaluation and assessment, competency-based assessment, and outcomes assessment. 

Prior to teaching part-time for UTK, I served as a graduate teaching assistant in two online, synchronous ESM classes while enrolled in the PhD program: Educational Research and Survey Research. In addition, I taught in-person first-year seminars to undergraduates for many years in my previous academic advising roles. However, it wasn’t until I became involved in a teaching certificate program offered by UGA’s CVM this year that I truly began to reflect more on my own teaching style, and explore ways to better engage students, especially in an online, asynchronous environment. For those who are new to teaching online classes or just need some new ideas, I thought it would be helpful to share what I’ve learned about engaging students online.  

Online Learning 

While many online courses meet synchronously, meaning they meet virtually at a scheduled time through platforms like Zoom or other Learning Management System (LMS) tools, there are also online classes that have no scheduled meeting times or live interactions. These classes are considered asynchronous. If you have taken an online, asynchronous course, you likely already know that it can be easy to forget about the class, primarily because there is no scheduled class time to keep you on track. When I worked as an academic advisor, I would often encourage my students who registered for these types of courses to set aside certain days or times of the week to devote to those classes. Many college students struggle with time management, especially in the first year, so this was one way to help them stay engaged in the class and up to date with assignments.

While it is certainly important for students to show up (or log in) and participate, it’s even more important for instructors to create an online environment that will motivate students to do so. As discussed by Conrad and Donaldson (2012), online engagement is related to student participation and interaction in the classroom, and learning in the classroom (online or in-person) rests upon the instructor’s ability to create a sense of presence and engage students in the learning process. The key is for students to be engaged and supported so that they take responsibility for their own learning (Conrad & Donaldson, 2012). So, how might you create an engaging online environment for students?

Engaging Students in Online Classes 

Below are some strategies I currently use to engage students in my online, asynchronous program evaluation classes:  

  • Reach out to the students prior to the start of class via welcome email 
  • Post information about myself via an introduction post – also have students introduce themselves via discussion posts 
  • Develop a communication plan – let students know the best way to get in touch with me 
  • Host weekly virtual office hours – poll students about their availability to find the best time 
  • Clearly organize the course content by weekly modules 
  • Create a weekly checklist and/or introduction to each module 
  • Use the course announcements feature to send out reminders of assignment due dates  
  • Connect course content to campus activities, workshops, events, etc.  
  • Utilize team-based projects 
  • Provide opportunities for students to reflect on learning (i.e., weekly reflection journals) 
  • Provide feedback on assignments in a timely manner 
  • Allow for flexibility and leniency  
  • Reach out to students who miss assignment due dates – offer to meet one-on-one if needed 

In addition to these strategies, the Center for Teaching and Learning at Northern Illinois University has an excellent website with even more recommendations for increasing student engagement in online courses. Their recommendations focus on the following areas: 1) set expectations and model engagement, 2) build engagement and motivation with course content and activities, 3) initiate interaction and create faculty presence, 4) foster interaction between students and create a learning community, and 5) create an inclusive environment. I also recommend checking your current institution’s Center for Teaching and Learning to see if they have tips or suggestions as they may be more specific for the LMS your institution uses. Lastly, you may find the following resources helpful if you wish to learn more about student engagement and online teaching and learning. 

Helpful Resources 

American Journal of Distance Education: https://www.tandfonline.com/toc/hajd20/current  

Fostering Connection in Hybrid & Online Formats:
https://www.ctl.uga.edu/_resources/documents/Fostering-Connection-in-Hybrid-Online-Formats.pdf  

Conrad, R. M., & Donaldson, J. A. (2012). Continuing to engage the online learner: More activities and resources for creative instruction. San Francisco, CA: Jossey-Bass.

Groccia, J. E. (2018). What is student engagement? New Directions for Teaching and Learning, 154, 11-20.  

How to Make Your Teaching More Engaging: Advice Guide 

https://www.chronicle.com/article/how-to-make-your-teaching-more-engaging/?utm_source=Iterable&utm_medium=email&utm_campaign=campaign_3030574_nl_Academe-Today_date_20211015&cid=at&source=ams&sourceid=&cid2=gen_login_refresh 

 How to Make Your Teaching More Inclusive:  

https://www.chronicle.com/article/how-to-make-your-teaching-more-inclusive/ 

Iowa State University Center for Excellence in Learning and Teaching: https://www.celt.iastate.edu/learning-technologies/engaging-students/ 

Khan, A., Egbue, O., Palkie, B., & Madden, J. (2017). Active learning: Engaging students to maximize learning in an online course. The Electronic Journal of e-Learning, 15(2), 107-115. 

Lumpkin, A. (2021). Online teaching: Pedagogical practices for engaging students synchronously and asynchronously. College Student Journal, 55(2), 195-207. 

Northern Illinois University Center for Teaching and Learning. (2024, March 1). Recommendations to Increase Student Engagement in Online Classes. https://www.niu.edu/citl/resources/guides/increase-student-engagement-in-online-courses.shtml.   

Online Learning Consortium: https://onlinelearningconsortium.org/read/olc-online-learning-journal/  

Watson, S., Sullivan, D. P., & Watson, K. (2023). Teaching presence in asynchronous online classes: It’s not just a façade. Online Learning, 27(2), 288-303. 

Filed Under: Evaluation Methodology Blog

Careers in Program Evaluation: Finding and Applying for a Job as a Program Evaluator


April 1, 2024 by Jonah Hall

By Jennifer Ann Morrow, Ph.D. 

Introduction: 

Hi! My name is Jennifer Ann Morrow and I’m an Associate Professor in Evaluation, Statistics, and Methodology at the University of Tennessee, Knoxville. I have been training emerging assessment and evaluation professionals for the past 22 years. My main research areas are training emerging assessment and evaluation professionals, higher education assessment and evaluation, and college student development. My favorite classes to teach are survey research, educational assessment, program evaluation, and statistics.

What’s Out There for Program Evaluators? 

What kind of jobs are out there for program evaluators? What organizations hire program evaluators? Where should I start my job search? What should I submit with my job application? These are typical questions my students ask me as they consider joining the evaluation job market. Searching for a job can be overwhelming; with so many resources and websites available, it is easy to get lost in all of the information. Here are some strategies that I share with my students as I help them navigate the program evaluation job market. I hope you find them helpful!

First, I ask students to describe the skills/competencies they have and which ones they believe they are strong in (and hopefully enjoy using!). In our program we use the American Evaluation Association Competencies (https://www.eval.org/About/Competencies-Standards/AEA-Evaluator-Competencies) in a self-assessment where we have students rate how confident they are in their ability to perform each competency. Students rate themselves, and we provide strategies for remedying deficiencies, each year that they are in our program. Conducting a self-assessment of your skills/competencies and strengths and weaknesses is a great way to figure out what types of jobs best fit your skillset. It is also helpful when crafting a cover letter! Check out the resources for additional examples of self-assessments!

Second, I have students create or update their curriculum vitae (CV) and resume; depending on the jobs they plan to apply for, they may need one or the other. I tell them to use the information from their skills self-assessment and their graduate program of study to craft the CV/resume. I also have them develop a general cover letter (to be tailored for each specific job) that showcases their experience, skills, and relevant work products. There are a ton of resources available online (some are listed below), and I share example CVs, resumes, and cover letters from some of our graduates. I also encourage students to get feedback from faculty and peers before using these materials in a job application. 

Third, I encourage students to develop a social media presence (or clean up their current one). I highly recommend creating a LinkedIn profile (My LinkedIn Profile). Make sure your profile showcases your skills, education, and experiences, and make connections with others in the program evaluation field. LinkedIn is also a great place to search for evaluation jobs! I also recommend that students create an academic website (Dr. Rocconi’s Website), where you can go into more detail about your experiences and share work products (e.g., publications, presentations, evaluation reports). Make sure you put your LinkedIn and website links at the top of your CV/resume! 

Fourth, I give my students tips for where and how to search for program evaluation jobs. I encourage them to draft relevant search terms (e.g., program evaluator, evaluation specialist, program analyst, data analyst) and make a list of job sites to use (see the resources for some of my favorites!). On many of these sites you can filter by key terms, job title, location, salary, and more to narrow the results, and you can often sign up for email alerts that notify you when a new posting matches your search terms. I also encourage students to join their major professional organizations (e.g., AEA) and sign up for their newsletters or listservs, as many job opportunities are posted there. 

Lastly, I tell students to create an organized job search plan. I typically do this in Excel, but you can organize the information in a variety of formats and platforms. I create an Excel file that contains all of the jobs I apply for (i.e., name of organization, link to the job ad, contact information, date applied) and a list of when and where I am searching. When I was actively searching for jobs, I dedicated time each week to going through listserv emails and searching job sites for relevant openings, and I updated my Excel file each week during my search. Staying organized helps in case you need to follow up with organizations about the status of your application. 
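Excel works well for this, but the same tracker can live in any plain format. As one illustration of the kind of application log described above, here is a minimal sketch that writes the log to a CSV file; the column names, filename, and sample entry are all hypothetical, not a prescribed format.

```python
import csv

# Columns mirroring the tracking file described above.
FIELDS = ["organization", "job_link", "contact", "date_applied", "status"]

# One illustrative application record; append a new row for each job you apply to.
applications = [
    {
        "organization": "Example Evaluation Group",  # hypothetical employer
        "job_link": "https://example.com/jobs/123",
        "contact": "hr@example.com",
        "date_applied": "2024-03-01",
        "status": "applied",
    },
]

# Write the tracker; update it during each weekly job search session.
with open("job_search_tracker.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(applications)
```

A `status` column like this one makes the weekly follow-up step easy: filter for rows still marked "applied" and check in with those organizations.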

So, good luck on your job search and I hope that my tips and resources are helpful as you start your journey to becoming a program evaluator! 

 

Resources 

American Evaluation Association Competencies: https://www.eval.org/About/Competencies-Standards/AEA-Evaluator-Competencies  

Article about How to Become a Program Evaluator: https://www.evalcommunity.com/careers/program-evaluator/ 

Article about Program Evaluation Careers: https://money.usnews.com/money/careers/articles/2008/12/11/best-kept-secret-career-program-evaluator 

Article about Program Evaluation Jobs: https://www.evalcommunity.com/job-search/program-evaluation-jobs/ 

Creating a LinkedIn Profile: https://blog.hubspot.com/marketing/linkedin-profile-perfection-cheat-sheet  

Creating an Academic Website: https://theacademicdesigner.com/2023/how-to-make-an-academic-website/  

Evaluator Competencies Assessment: https://www.khulisa.com/wp-content/uploads/sites/9/2021/02/2020-Evaluator-Competencies-Assessment-Tool-ECAT_Final_2020.07.27.pdf  

Evaluator Qualities: https://www.betterevaluation.org/frameworks-guides/managers-guide-evaluation/scope-evaluation/determine-evaluator-qualities 

Evaluator Self-Assessment: https://www.cdc.gov/evaluation/tools/self_assessment/evaluatorselfassessment.pdf  

Program Evaluation Curriculum Vita Tips: https://wmich.edu/sites/default/files/attachments/u1158/2021/Showcasing%20Your%20Eval%20Competencies%20in%20Your%20Resume%20or%20Vita%20for%20PDF.pdf  

Program Evaluation Resume Tips: https://www.zippia.com/program-evaluator-jobs/skills/#  

Resume and CVs Resources: https://www.careereducation.columbia.edu/topics/resumes-cvs  

Resume and Job Application Resources: https://academicguides.waldenu.edu/careerservicescenter/resumesandmore  

Six C’s of a Good Evaluator: https://www.evalacademy.com/articles/2019/9/26/what-makes-a-good-evaluator  

UTK’s Evaluation Methodology MS program (distance ed): https://volsonline.utk.edu/programs-degrees/education-evaluation-methodology-ms/ 

AAPOR Jobs: https://jobs.aapor.org/jobs/?append=1&quick=industry%7Csurvey&jas=3 

American Evaluation Association Job Bank: https://careers.eval.org/ 

Evaluation Jobs: https://evaluationjobs.org/ 

Higher Ed Jobs: https://www.higheredjobs.com/ 

Indeed.com: https://www.indeed.com/ 

Monitoring and Evaluation Career Website: https://www.evalcommunity.com/ 

NCME Career Center: https://www.ncme.org/community/community-network2/careercenter 

USA Government Job Website: https://www.usajobs.gov/ 

 

Filed Under: Evaluation Methodology Blog

Brian Mells Recognized as Field Award Recipient

March 25, 2024 by Jonah Hall

Dr. Brian Mells, Principal of Whites Creek High School in Metro Nashville Public Schools, has been named the recipient of the William J. and Lucille H. Field Award for Excellence in Secondary Principalship for the State of Tennessee.

The Field Award was established to recognize one outstanding secondary school leader each year who demonstrates leadership excellence through commitment to the values of civility, candor, courage, social justice, responsibility, compassion, community, persistence, service, and excellence. Administered by the College of Education, Health, and Human Sciences at the University of Tennessee, the award identifies a Tennessee secondary school principal whose life and work are characterized by leadership excellence. It also encourages secondary school principals to pause and reflect on their current leadership practice and to consider their experience, challenges, and opportunities in light of the personal values they embody. 

 

This year’s Field Award recipient is Dr. Brian Mells, Principal of Whites Creek High School in Metro Nashville Public Schools. A secondary principal since 2016, Dr. Mells holds a bachelor’s degree from the University of Tennessee, a master’s from Trevecca Nazarene University, and an EdS and an EdD from Carson-Newman University. During Dr. Mells’ tenure at Whites Creek High School, he has led his campus to excellence by supporting academic rigor and student achievement and by strengthening positive relationships with all stakeholders. Dr. Mells is an exceptional school leader who has taken the initiative to implement numerous programs on his campus, inspire instructional innovation, and improve student achievement. Dr. Mells stated that the “core belief of [his] leadership is that all students can achieve and grow academically, socially, and emotionally, when the appropriate systems and structures are in place for them to be successful.” 

Under Dr. Mells’ leadership, Whites Creek High School increased academic achievement outcomes for all students and earned an overall composite TVAAS of Level 5 for the first time in school history, a status it has maintained for the past two years. Dr. Mells was nominated for the Field Award by MNPS superintendent Adrienne Battle and endorsed by Chief of Innovation Renita Perry. Perry commented, “Dr. Mells is an innovative school leader who is passionate about developing collective efficacy and collective accountability among his faculty and staff to ensure that they achieve excellence for all stakeholders.” The Department of Educational Leadership and Policy Studies at the University of Tennessee is proud to name Dr. Mells as this year’s Field Award winner. Congratulations, Dr. Brian Mells! 

Filed Under: News

How My Dissertation Came to be through ESM’s Support and Guidance

March 15, 2024 by Jonah Hall

By Meredith Massey, Ph.D. 

Who I am

Greetings! I’m Dr. Meredith Massey. I finished my PhD in Evaluation, Statistics, and Methodology (ESM) at UTK in the Fall of 2023. In addition to my PhD in ESM, I completed graduate certificates in Women, Gender, and Sexuality and in Qualitative Research Methods in Education. While I was a part-time graduate student, I also worked full-time as an evaluation associate at Synergy Evaluation Institute, a university-based evaluation center. By day, I worked for clients evaluating their STEM education and outreach programs. By night, I was an emerging scholar in ESM. During my time in the program, my research interests grew to include andragogical issues in applied research methods courses, classroom measurement and assessment, feminist research methods, and evaluation.

How my dissertation came to be

In the ESM program, students can choose to complete a three-manuscript dissertation rather than a traditional five-chapter dissertation. When it came time to decide what my dissertation would look like, my faculty advisor, Dr. Leia Cain, suggested I consider the three-manuscript option. As someone with varied interests, I was drawn to this idea because it gave me the flexibility to work on three separate but related studies. My dissertation flowed from a research internship that I completed with Dr. Cain, in which I interviewed qualitative faculty about their assessment beliefs and practices within their qualitative methods courses. I wrote a journal article on that study to serve as my comprehensive exam writing requirement. Using my original internship study as the basis for my first dissertation manuscript was an expedient strategy, as it allowed me to structure my second and third manuscripts on the findings of the first. I presented my ideas for the second and third manuscripts to my committee in my dissertation proposal, accepted their feedback on how to proceed, and then got to work.

Dissertation topic and results

In my multi-paper dissertation, entitled “Interviews, rubrics and stories (Oh my!): My journey through a three-manuscript dissertation,” I chose to center faculty and student perspectives on assessment and learning. My first and second studies focused on those two issues, while the third paper explored the student perspective further through the story of the parallel formation of my scholarly identity and my new identity as part of a married couple. In the first study, “Interviewing the Interviewers: How qualitative faculty assess interviews,” I reported how faculty use interview assignments in their courses and how they engage with assessment tools such as rubrics for those assignments. We learned that faculty view interview assignments as the best and most comprehensive way to give students experience as qualitative researchers. And while instructors had differing opinions on whether rubrics were an appropriate assessment tool, all believed that giving students feedback was an essential assessment practice. My findings in that manuscript shaped the plan for the second study. In “I can do qualitative research: Using student-designed rubrics to teach interviewing,” I detailed testing an innovative student-created rubric for an interview assignment in an introductory qualitative research methods course, and I used student reflections as the basis for an ethnodrama about how students experience their first interview assignments and how they engaged with the rubric. From this study, we learned that students grew in their confidence conducting interviews, experienced a transformation in their paradigm, and were conflicted about the student-designed rubric: some found it useful, and some did not. 
Both manuscripts informed my third, an autoethnography detailing two parallel transitions in my identity: from evaluator to scholar, and from single person to married person. I wrote interweaving stories chronicling the similar and contrasting methods I use as an evaluator and researcher, how this tied into my growing identity as a scholar, and how I noticed my identity changing throughout my engagement and new marriage to my longtime boyfriend, now husband. These studies contributed valuable knowledge to the existing, though limited, andragogical literature on qualitative research methods. My hope going forward is that qualitative faculty continue this focus, starting conversations about their classroom assessments and completing their own andragogical studies to determine the impact of their teaching on their students’ learning.

What’s next?

Now that I’m finished with my dissertation and my studies, I am happy to report that I have accepted a promotion at Synergy Evaluation Institute, and I’ve also been given the opportunity to teach qualitative research methods courses as an adjunct in the ESM program. I’m excited to stay associated with the program and teach future ESM students. Being in the ESM program at UTK, while difficult at times, has also been a joy. The program encouraged me to explore my varied interests and ultimately supported me as I grew professionally as an evaluator and scholar. It accommodated and respected me as a working professional, and I highly recommend it to any student interested in working with data as an evaluator, assessment professional, statistician, qualitative researcher, faculty member, or all of the above. There’s a place for all in ESM.

Resources

Journal article citation

Massey, M.C., & Cain, L.K. (In press). Interviewing the interviewers: How qualitative faculty assess interviews. The Qualitative Report.

Books specifically about Qualitative Research Methods Andragogy

Eisenhart, M., & Jurow, A. S. (2011). Teaching qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (4th ed., pp. 699-714). Sage.

Hurworth, R. E. (2008). Teaching qualitative research: Cases and issues. Sense Publishers.

Swaminathan, R., & Mulvihill, T. M. (2018). Teaching qualitative research: Strategies for engaging emerging scholars. Guilford Publications.

Books to read to become familiar with Ethnodrama as a method

Leavy, P. (2015). Handbook of Arts-Based Research (2nd ed.). The Guilford Press.

Leavy, P. (2018). Handbook of Arts-Based Research (3rd ed.). The Guilford Press.

Saldana, J. (2016). Ethnotheatre: Research from page to stage. Routledge. http://doi.org/10.4324/9781315428932

Most useful citations to become familiar with autoethnography as a method

Cooper, R., & Lilyea, B. V. (2022). I’m Interested in Autoethnography, but How Do I Do It? The Qualitative Report, 27(1), 197-208. https://doi.org/10.46743/2160-3715/2022.5288

Ellis, C. (2004). The ethnographic I: A methodological novel about autoethnography. AltaMira.

Ellis, C. (2013). Carrying the torch for autoethnography. In S. H. Jones, T. E. Adams., and C. Ellis (eds.) Handbook of Autoethnography (pp. 9-12). Left Coast Press.

Filed Under: Evaluation Methodology Blog

Introducing the Evaluation Methodology MS Program at UTK!

March 1, 2024 by Jonah Hall

By Dr. Jennifer Ann Morrow 

Hi everyone! My name is Dr. Jennifer Ann Morrow, and I’m the program coordinator for the University of Tennessee, Knoxville’s new distance education master’s program in Evaluation Methodology. I’m happy to announce that we are currently taking applications for our first cohort, which will start in Fall 2024. In a world driven by data, the EM master’s program gives you the skills to make evidence-based decisions!  

So Why Should You Join Our Program? 

Fully Online Program 

Our new program is designed for the working professional: all courses are fully online and asynchronous, which enables students to complete assignments at times convenient for them. Although our courses are asynchronous, our faculty offer optional weekly synchronous student hours and help sessions for additional assistance and mentorship. Students also participate in both group and individual advising sessions each semester, where they receive mentorship, practical experience suggestions, and career exploration guidance.  

Applied Coursework 

Our 30-credit program is designed to be completed in just under 2 years (5 semesters, only 2 courses per semester!). Each class includes hands-on, applied experiences across the entire program evaluation process, including evaluation design, data collection, data analysis, and data dissemination. In their first year, students take a two-semester program evaluation course sequence, statistics 1, introduction to qualitative research 1, evaluation designs and data collection methods, and an elective. In their second year, students take survey research, disseminating evaluation results, and a two-semester evaluation practicum course sequence in which they finalize a portfolio of their evaluation experiences to fulfill the comprehensive exam requirement. Students who are unable to take 6 credits a semester have up to 6 years to complete the degree at a slower pace.  

Experienced Faculty 

Our faculty are experienced educators! All work as evaluators or in a related role such as assessment professional, applied researcher, or psychometrician. They are dedicated faculty who understand what skills and competencies are needed in the evaluation field and ensure that their classes focus on them. All are actively involved in their professional organizations (e.g., American Evaluation Association, American Psychological Association, Association for the Assessment of Learning in Higher Education, Association for Institutional Research) and publish their scholarly work in peer-reviewed journals.  

How to Apply 

It’s easy to apply! Go to the UTK Graduate Admissions Portal (https://apply.gradschool.utk.edu/apply/) and fill out your application. You need 2-3 letters of recommendation (provide your recommenders’ contact information and UTK will reach out to them), college transcripts, a goals statement (a letter introducing yourself and explaining why you want to join our program), and the application fee. No GRE scores are needed! Applications are due by July 1st of each year (though we will review early submissions!). Tuition is $700 per graduate credit ($775 for out-of-state students). 

 

Contact Me for More Information 

If you have any questions about our program just reach out! 

 

Jennifer Ann Morrow Ph.D.
jamorrow@utk.edu
(865)-974-6117
https://cehhs.utk.edu/elps/people/jennifer-ann-morrow-phd/

Helpful Resources 

Evaluation Methodology Program Website: https://cehhs.utk.edu/elps/evaluation-methodology-ms/  

Evaluation Methodology Program VOLS Online Website: https://volsonline.utk.edu/programs-degrees/education-evaluation-methodology-ms/  

Evaluation Methodology Program Student Handbook: https://cehhs.utk.edu/elps/wp-content/uploads/sites/9/2023/11/EM-MASTERS-HANDBOOK-2023.pdf  

UTK Educational Leadership and Policy Studies Website: https://cehhs.utk.edu/elps/  

UTK Educational Leadership and Policy Studies Facebook Page: https://www.facebook.com/utkelps/?ref=embed_page  

UTK Graduate School Admissions Website: https://gradschool.utk.edu/future-students/office-of-graduate-admissions/applying-to-graduate-school/  

UTK Graduate School Admission Requirements: https://gradschool.utk.edu/future-students/office-of-graduate-admissions/applying-to-graduate-school/admission-requirements/  

UTK Graduate School Application Portal: https://apply.gradschool.utk.edu/apply/  

UTK Distance Education Graduate Fees: https://onestop.utk.edu/wp-content/uploads/sites/9/sites/63/2023/11/Spring-24-GRAD_Online.pdf  

UTK Graduate Student Orientations: https://gradschool.utk.edu/future-students/graduate-student-orientations/  

American Evaluation Association: https://www.eval.org/ 

AEA Graduate Student and New Evaluator TIG: https://www.facebook.com/groups/gsnetig/ 

Filed Under: Evaluation Methodology Blog

Evaluation Capacity Building: What is it, and is a Job Doing it a Good Fit for Me?

February 15, 2024 by Jonah Hall

By Dr. Brenna Butler

Hi, I’m Dr. Brenna Butler, and I’m currently an Evaluation Specialist at Penn State Extension (https://extension.psu.edu/brenna-butler). I graduated from the ESM Ph.D. program in May 2021, and in my current role a large portion of my job involves evaluation capacity building (ECB) within Penn State Extension. What does ECB look like day-to-day, and would a job with an ECB component be a good fit for you? This blog post covers some of my thoughts and opinions on what ECB may look like in a job in general. Keep in mind that these opinions are exclusively mine and don’t represent those of my employer.

Evaluation capacity building (ECB) is the process of increasing the knowledge, skills, and abilities of individuals in an organization to conduct quality evaluations. This is often done by evaluators (like me!) providing the tools and information for individuals to conduct sustained evaluative practices within their organization (Sarti et al., 2017). The amount of literature covering ECB is on the rise (Bourgeois et al., 2023), indicating that evaluators taking on ECB roles within organizations may also be increasing. Although there are formal models and frameworks in the literature that describe ECB work within organizations (the article by Bourgeois and colleagues (2023) provides an excellent overview of these), I will cover three specific qualities of what it takes to be involved in ECB in an organization.

ECB Involves Teaching

Much of my role at Penn State Extension involves mentoring Extension Educators on how to incorporate evaluation into their educational programming. Sometimes this mentorship takes a more formal teaching form, such as conducting webinars and trainings on topics like writing good survey questions or developing a logic model. Other times it takes a more informal route, as when I answer Extension Educators’ emailed questions about data analysis or ways to enhance their data visualizations for a presentation. Enjoying teaching and assisting others in all aspects of evaluation are key qualities of an effective evaluator who leads ECB in an organization.

ECB Involves Leading

Taking on an ECB role involves providing a great deal of guidance and serving as the go-to expert on evaluation within the organization. Individuals will often look to the evaluator in these positions for direction on evaluation and assessment projects. This requires speaking up in meetings to advocate for strong evaluative practices (“Let’s maybe not send out a 30-question survey where every single question is open-ended”). An evaluator involved in ECB work needs to be comfortable speaking up and going against the norms of “how the organization has always done something.”

One way evaluators can tackle this “we’ve always done it this way” mentality is through an evaluation community of practice. Each meeting centers on a different evaluation topic; members of the organization are invited to talk about what has and hasn’t worked well for them in that area and to showcase work they have conducted in collaboration with the evaluator. The intention is that these community of practice meetings, open to the entire organization, can be one way of moving toward evaluation best practices and leaning less on old habits.

ECB Involves Being Okay with “Messiness”

An organization may invest in hiring an evaluation specialist who can guide the group to better evaluative practices because they lack an expert in evaluation. If this is the case, evaluation plans may not exist, and your role as an evaluator in the organization will be to start from scratch in developing evaluative processes. Alternatively, it could be that evaluations have been occurring in the organization but may not be following best practices, and you will be tasked with leading the efforts to improve these practices.

Work in this scenario can become “messy” in the sense that tracking down historical evaluation data collected before an evaluator was guiding these efforts can be very difficult. For example, there may be no centralized location or method for storing paper survey data: one version of the data may be tally marks on a sheet of paper indicating the number of responses to each question, while another version of the same survey data sits in an Excel file with unlabeled rows. These scenarios require the evaluator to discern whether the historical data are worth combing through and combining for analysis, or whether starting from scratch and collecting new data will ultimately save time and effort. Being part of ECB in an organization means being up for the challenge of working through these “messy,” complex scenarios.

Hopefully, this brief overview of some of the work evaluators do in ECB within organizations helps you discern whether a position involving ECB may be in your future (or not!).

 

Links to Explore for More Information on ECB

https://www.betterevaluation.org/frameworks-guides/rainbow-framework/manage/strengthen-evaluation-capacity

https://www.oecd.org/dac/evaluation/evaluatingcapacitydevelopment.htm

http://www.pointk.org/client_docs/tear_sheet_ecb-innovation_network.pdf

https://wmich.edu/sites/default/files/attachments/u350/2014/organiziationevalcapacity.pdf

https://scholarsjunction.msstate.edu/cgi/viewcontent.cgi?article=1272&context=jhse

 

References

Bourgeois, I., Lemire, S. T., Fierro, L. A., Castleman, A. M., & Cho, M. (2023). Laying a solid foundation for the next generation of evaluation capacity building: Findings from an integrative review. American Journal of Evaluation, 44(1), 29-49. https://doi.org/10.1177/10982140221106991

Sarti, A. J., Sutherland, S., Landriault, A., DesRosier, K., Brien, S., & Cardinal, P. (2017). Understanding of evaluation capacity building in practice: A case study of a national medical education organization. Advances in Medical Education and Practice, 761-767. https://doi.org/10.2147/AMEP.S141886

Filed Under: Evaluation Methodology Blog

Supporting Literacy Teachers with Actionable Content-Based Feedback

February 6, 2024 by Jonah Hall

By Dr. Mary Lynne Derrington & Dr. Alyson Lavigne 

Please Note: This is Part 3 of a four-part series on actionable feedback. Stay tuned for the final post, which will focus on teacher feedback in Early Childhood Education.

Missed the beginning of the series? Click here to read Part 1
on making teacher feedback count!

A strong literacy foundation in students’ early years is critical for success in their later ones. School leadership plays a significant part in establishing this foundation by equipping teachers with the right professional development.

Many (but not all) school leaders are versed in effective literacy instruction. Given its foundational importance, it is wise for principals — and others who observe and mentor teachers — to leverage the key elements of effective literacy instruction in the observation cycle. In this blog post, we outline two ways to do so.

Jan Dole, Parker Fawson, and Ray Reutzel suggest that one way to use research-based supervision and feedback practices in literacy instruction is to include in the observation cycle tools, guides, and checklists that specifically focus on literacy instruction, such as:

  • The Protocol for Language Arts Teaching Observations (PLATO; Grossman, 2013)
  • The Institute of Education Sciences’ (IES) K-3 School Leader’s Literacy Walkthrough Guide (Kosanovich et al., 2015)
  • The Institute of Education Sciences’ (IES) Grades 4-12 School Leader’s Literacy Walkthrough Guide (Lee et al., 2020)

These tools highlight key concepts or what can be called “look-fors” of literacy rich environments by using a rubric or checklist. Some examples follow:

  • Strategy Use and Instruction: The teacher’s ability to teach strategies and skills that support students in reading, writing, speaking, listening, and engaging with literature (PLATO)
  • Literacy Texts: Retell familiar stories, including key details (IES K-3; Kosanovich et al., 2015)
  • Vocabulary and Advanced Word Study: Explicit instruction is provided in using context clues to help students become independent vocabulary learners using literary and content area text (IES 4-12; Lee et al., 2020)

A second way is to develop professional learning communities (PLCs) to extend literacy supervision and feedback. Successful literacy-focused PLCs:

  • establish a shared literacy mission, vision, values, and goals,
  • engage in regular collective inquiry on evidence-based literacy practices, and
  • promote continuous literacy instruction improvement among staff.

These strategies can be used by school leaders or complement the work of a school literacy coach. Ready to create a learning community in your school or district? Read KickUp’s tips for setting PLCs up for success.

This blog entry is part of a four-part series on actionable feedback. Stay tuned for our next post that will focus on concrete ways to provide feedback to Early Childhood Education teachers.

If this blog has sparked your interest and you want to learn more, check out our book, Actionable Feedback to PK-12 Teachers. And for other suggestions on supervising teachers in literacy, see Chapter 9 by Janice A. Dole, Parker C. Fawson, and D. Ray Reutzel.

Filed Under: News


Educational Leadership and Policy Studies

325 Bailey Education Complex
Knoxville, Tennessee 37996

Phone: 865-974-2214
Fax: 865-974-6146
