Educational Leadership and Policy Studies


To Evaluate, or to Be Evaluated? That is the Question.

June 15, 2024 by Jonah Hall

By M. Andrew Young

Hello! My name is M. Andrew Young. I’m a second-year Ph.D. student in the Evaluation, Statistics, and Methodology (ESM) program in the Educational Leadership and Policy Studies (ELPS) department at the University of Tennessee, Knoxville. In addition to my educational journey here at the University of Tennessee, I am also a higher education assessment director in East Tennessee State University’s College of Pharmacy.

Evaluation practices are increasingly used across many industries. The American Evaluation Association (AEA) lists general industries on its careers page (Consulting, Education/Teaching/Administration, Government/Civil Service, Healthcare/Health Services, Non-profit/Charity, Other) (Evaluation Jobs – American Evaluation Association, n.d.). A quick Google search also turns up numerous other business-related evaluation opportunities (What Industries Employ Evaluators – Google Search, 2024).

Why do I bother stating the obvious? Evaluation is everywhere! 

Simple. As an emerging evaluator (this is a whole different discussion, but in short, I’m newer to the field, so I’m “emerging”), it is important to critically reflect upon what it means to be an evaluator as a professional identity. Medical doctors have “M.D.” or “D.O.” degrees, and after an initial licensing process, they have ongoing licensure examinations as well as continuing education and conduct requirements (FSMB | About Physician Licensure, 2024). Professional engineers have similar requirements, such as initial licensure and continuing education (Maintaining a License, 2024). Pharmacists must pass licensure examinations (one national exam, the NAPLEX, and one state-specific exam, the MPJE) and have continuing education requirements as well (Pharmacist Licensing Requirements & Service | Harbor Compliance | www.Harborcompliance.Com, 2024).

Why do they do this? Education helps, experience helps, but why do these professions require licensure and continuing education as a condition of the right to practice?

Professional identity is an important function of entrustment given to a profession, and I pose the question: Can licensure with a continuing education requirement support that trust given to evaluators? It may be time to consider to what extent credentialing would support entrustment by those affected by or participating in evaluation activities. 

In the “early days” of the AEA in the 1990s, the subject of credentialing was broached, and there was such sharp dissent about how to handle it that AEA deferred the issue until it formed its Competencies Task Force. In 2015, “The Task Force believed that without AEA agreement about what competencies were essential, it was premature to decide how these competencies would be measured and monitored. Efforts such as the viability and value of adopting a credentialing or assessment system can be the task of working groups that follow ours” (Tucker et al., 2020).

AEA is in good company without a licensure or credentialing requirement: most other major evaluation societies likewise neither require nor offer credentialing (to my knowledge, only the CES and the JES currently offer it) (Altschuld, 1999; Ayoo et al., 2020; Tucker et al., 2020).

What does it mean to me? 

In a rather pragmatic sense, a credentialing requirement would add a barrier to entry that would protect the economy of evaluation. A continuing education requirement would help ensure that practitioners keep current, and a conduct policy would help ensure ethical practice of evaluation. All in all, it would hopefully maintain the quality of the profession. While I have not explored how pervasive “bad” evaluation practice is, as the field continues to grow, the influx of practitioners could open the door to inexperienced and unknowledgeable evaluators.

“What about people who are doing some evaluation work for employers but aren’t ‘professional’ evaluators?” you may ask. Good question. I’ll answer: people who do evaluation work as part of their private employment would not be required to be licensed or credentialed, but having a license or credential might give them leverage to advance their careers and be compensated commensurate with their abilities. One does not have to be licensed in a software language to use it in the context of employment, but a person with a credential in that language (usually in the form of a certificate embedded in a degree program) can ask for more compensation because they have demonstrated competence, and their employer can thereby entrust them with the tasks they will be asked to complete.

Credentialing has been touted to do much more for the profession than what I listed above; for that, here is a cool resource to read:

  • Ayoo, S., Wilcox, Y., LaVelle, J. M., Podems, D., & Barrington, G. V. (2020). Grounding the 2018 AEA Evaluator Competencies in the Broader Context of Professionalization. New Directions for Evaluation, 2020(168), 13–30. https://doi.org/10.1002/ev.20440 

What’s the downside? 

Well, like anything, credentialing can have negative implications. First, a credentialing body must be formed; second, credentialing requirements must be developed, adopted, and implemented. Then there is the question of what to do with evaluators already practicing in the field. Then the licensure examination must be maintained. The list goes on: formalizing the credentialing of evaluators can get very expensive and become a very large endeavor. Pharmacy faced this change when the industry moved from a bachelor’s degree requirement to a PharmD program in 1997. The solution was to allow BPharm and previously licensed pharmacists to continue to practice, and the accrediting body allowed colleges of pharmacy to offer two-year “upgrades” from the BPharm to the PharmD for already-licensed pharmacists (Supapaan et al., 2019).

Just as important: how do we design and implement a credentialing process that is both equitable and sustainable?

Conclusion: 

Harkening back to the Shakespearean title, I leave you with this: 

“To evaluate, or to be evaluated, that is the question: 

Whether ‘tis nobler in the mind to suffer the slings and arrows of a burdensome credentialing process, 

Or to take arms against the lack of professional identity, 

And by adopting a credentialing process, end them. 

To credential – to license. 

No more; and by credentialing process to say we end the confusion of ‘who am I?’ and the thousand questions of entrustment that our profession is heir to: ‘tis a consummation devoutly to be wish’d.” 

References: 

Altschuld, J. W. (1999). The Certification of Evaluators: Highlights from a Report Submitted to the Board of Directors of the American Evaluation Association. American Journal of Evaluation, 20(3), 481–493. https://doi.org/10.1177/109821409902000307 

Ayoo, S., Wilcox, Y., LaVelle, J. M., Podems, D., & Barrington, G. V. (2020). Grounding the 2018 AEA Evaluator Competencies in the Broader Context of Professionalization. New Directions for Evaluation, 2020(168), 13–30. https://doi.org/10.1002/ev.20440 

Clarke, P. A. (2009). Leadership, beyond project management. Industrial and Commercial Training, 41(4), 187–194. https://doi.org/10.1108/00197850910962760 

Evaluation Jobs—American Evaluation Association. (n.d.). Retrieved March 23, 2024, from https://careers.eval.org?site_id=22991 

FSMB | About Physician Licensure. (2024). https://www.fsmb.org/u.s.-medical-regulatory-trends-and-actions/guide-to-medical-regulation-in-the-united-states/about-physician-licensure/ 

Gill, S., Kuwahara, R., & Wilce, M. (2016). Through a Culturally Competent Lens: Why the Program Evaluation Standards Matter. Health Promotion Practice, 17(1), 5–8. https://doi.org/10.1177/1524839915616364 

Jarrett, J. B., Berenbrok, L. A., Goliak, K. L., Meyer, S. M., & Shaughnessy, A. F. (2018). Entrustable Professional Activities as a Novel Framework for Pharmacy Education. American Journal of Pharmaceutical Education, 82(5), 6256. https://doi.org/10.5688/ajpe6256 

Kumas-Tan, Z., Beagan, B., Loppie, C., MacLeod, A., & Frank, B. (2007). Measures of Cultural Competence: Examining Hidden Assumptions. Academic Medicine, 82(6). https://journals.lww.com/academicmedicine/fulltext/2007/06000/measures_of_cultural_competence__examining_hidden.5.aspx 

Liphadzi, M., Aigbavboa, C. O., & Thwala, W. D. (2017). A Theoretical Perspective on the Difference between Leadership and Management. Creative Construction Conference 2017, CCC 2017, 19-22 June 2017, Primosten, Croatia, 196, 478–482. https://doi.org/10.1016/j.proeng.2017.07.227 

Maintaining a License. (2024). National Society of Professional Engineers. https://www.nspe.org/resources/licensure/maintaining-license 

Pharmacist Licensing Requirements & Service | Harbor Compliance | www.harborcompliance.com. (2024). https://www.harborcompliance.com/pharmacist-license 

SenGupta, S., Hopson, R., & Thompson-Robinson, M. (2004). Cultural competence in evaluation: An overview. New Directions for Evaluation, 2004(102), 5–19. https://doi.org/10.1002/ev.112 

Supapaan, T., Low, B. Y., Wongpoowarak, P., Moolasarn, S., & Anderson., C. (2019). A transition from the BPharm to the PharmD degree in five selected countries. Pharmacy Practice, 17(3), 1611. https://doi.org/10.18549/PharmPract.2019.3.1611 

Tucker, S. A., Barela, E., Miller, R. L., & Podems, D. R. (2020). The Story of the AEA Competencies Task Force (2015–2018). New Directions for Evaluation, 2020(168), 31–48. https://doi.org/10.1002/ev.20439 

What industries employ evaluators—Google Search. (2024).

What Is Leadership? | Definition by TechTarget. (n.d.). CIO. Retrieved March 17, 2024, from https://www.techtarget.com/searchcio/definition/leadership 

Filed Under: Evaluation Methodology Blog

Wait, I Can’t Use p < 0.05?

June 1, 2024 by Jonah Hall

By Jake Working

Introduction 

You might have heard the recent rumblings in the statistics world: null hypothesis significance testing, statistical significance, and our beloved p-value have been coming into question. To be clear, the statistical soundness of these methods is not being doubted; their current use and interpretation in applied research are.

How did we get here? Why are interpretations of significance testing and p-values under fire? What does this mean for you, the applied researcher who uses these methods?

The literature surrounding this topic is huge, so I will start to provide some background to these questions in this blog post by including a brief introduction to a few important articles. My name is Jake Working, and I am currently studying for my Ph.D. in Evaluation, Statistics, and Methodology at the University of Tennessee, Knoxville. Let’s learn together.

How Did We Get Here?

Understanding the history of null hypothesis significance testing and p-values is just as important as crafting the future of these analytical methods. In this section, I direct you to check out Lee Kennedy-Shaffer’s article “Before p < 0.05 to Beyond p < 0.05: Using History to Contextualize p-Values and Significance Testing” (2019).

Kennedy-Shaffer reminds us of the history of significance testing and the p-value, viewing Sir Ronald Fisher’s popularization of p < 0.05 through a historical and contextual lens. Fisher was advancing statistical methodology at the same time as statistics legends such as Karl Pearson (yes, that Pearson) and William Gosset (of Guinness “Student” fame), who were all developing uses for significance testing. Fisher proposed p < 0.05 as a simple cut-off for significance in 1925. His reasoning was simple: “p = 0.05, or 1 in 20, is 1.96 or nearly 2…deviations exceeding twice the standard deviation are thus formally regarded as significant” (Fisher, 1925, p. 47 in Kennedy-Shaffer, 2019, p. 84).
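
Fisher’s arithmetic is easy to check for yourself. Here is a quick sketch (in Python with scipy, purely for illustration; this code is mine, not Kennedy-Shaffer’s) confirming that a two-sided tail probability of 0.05 under a normal model corresponds to about 1.96 standard deviations:

```python
# Verifying Fisher's "1.96 or nearly 2" numerically (illustrative sketch).
from scipy.stats import norm

# A two-sided tail probability of 0.05 leaves 0.025 in each tail, so the
# critical value is the 97.5th percentile of the standard normal.
print(norm.ppf(0.975))   # ~1.96

# Conversely, the chance of a deviation beyond 2 standard deviations:
print(2 * norm.sf(2.0))  # ~0.0455, "nearly" 1 in 20
```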

Sir Ronald Fisher, circa 1946, thinking about p-values (image: University of Adelaide)

Criticisms of, and alternatives to, significance testing have existed since its onset. These include Neyman and Pearson’s alpha (1933) and Bayesian inverse probability, and Fisher himself even cautioned the field against a fixed level of significance (Kennedy-Shaffer, 2019, pp. 85-86). So, what’s the beef with p-values now?

Laying Down the Law 

As the discussion on p-values and other flaws in statistical reporting seemed to rekindle in the mid-2010s, the American Statistical Association decided to provide the scientific and research community with grounded direction on p-values. In this section, I urge you to read the very short, but impactful “ASA Statement on p-Values: Context, Process, and Purpose” by Ronald Wasserstein and Nicole Lazar (2016).  

They articulated six simple principles on p-values: 

  1. P-values can indicate how incompatible the data are with a specified statistical model
  2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone
  3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold
  4. Proper inference requires full reporting and transparency
  5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a result
  6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis

These principles urge the researcher to contextualize and completely understand their data and analysis methods, making bright lines such as p < 0.05 useless. Rosnow and Rosenthal (1989) said it neatly: “…surely, God loves the .06 nearly as much as the .05” (p. 1277).
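
Principle 5 in particular is easy to see in a toy simulation: with a large enough sample, even a practically negligible effect yields a tiny p-value. The sketch below (Python with numpy and scipy; the effect size and sample sizes are invented for illustration) makes the point:

```python
# Toy demonstration of ASA principle 5: p-values do not measure effect size.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
tiny_effect = 0.02  # 2% of a standard deviation -- practically negligible

for n in (100, 100_000):
    a = rng.normal(0.0, 1.0, size=n)
    b = rng.normal(tiny_effect, 1.0, size=n)
    result = ttest_ind(a, b)
    print(f"n = {n:>7,}: p = {result.pvalue:.4f}")

# Typical output: the small sample is "not significant," the huge one is
# highly significant -- yet the underlying effect (0.02 SD) never changed.
```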

Okay, so what do I do now? 

If you are a researcher, student, or just interested in statistical analysis, one thing you can do is to update your analytical habits. Check out this article by Wasserstein, Lazar, and Schirm: “Moving to a World Beyond ‘p < 0.05’” (2019) for context and suggestions. 

Another Ronald, Ron Wasserstein, doing his best Fisher imitation (image: Amstat News)

Included in their article is a mental framework to guide future use of these statistical methods, which they summarize in two sentences: “Accept uncertainty. Be thoughtful, open, and modest” (Wasserstein et al., 2019, p. 2). Their framework helps set your mental state before delving into the eight pages of action items summarized from 43 different articles on this topic.

Wasserstein et al. make it an easy read by summarizing each article into actionable bullet points and organizing the suggestions into five topic areas: 

  1. Getting to a Post “p < 0.05” Era
  2. Interpreting and Using p
  3. Supplementing or Replacing p
  4. Adopting More Holistic Approaches
  5. Reforming Institutions: Changing Publication Policies and Statistical Education

Call to Action 

As it would be impossible to summarize everything from these articles into one blog post, I urge you to read the three articles in this post. You will better understand p-values and become a better researcher, evaluator, and statistician because of it.  

  1. “Before p < 0.05 to Beyond p < 0.05: Using History to Contextualize p-Values and Significance Testing” (Kennedy-Shaffer, 2019)
  2. “The ASA Statement on p-Values: Context, Process, and Purpose” (Wasserstein & Lazar, 2016)
  3. “Moving to a World Beyond p < 0.05” (Wasserstein et al., 2019)

No need to abandon hypothesis testing and p-values, but be prepared to better understand these tools for what they are: statistical tools. 

References 

Kennedy-Shaffer L. (2019). Before p < 0.05 to Beyond p < 0.05: Using History to Contextualize p-Values and Significance Testing. The American Statistician, 73(Suppl 1), 82–90. https://doi.org/10.1080/00031305.2018.1537891  

Rosnow, R.L. & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist, 44, 1276-1284. 

Wasserstein, R. L., & Lazar, N. A. (2016). The ASA Statement on p-Values: Context, Process, and Purpose. The American Statistician, 70(2), 129-133. https://doi.org/10.1080/00031305.2016.1154108  

Wasserstein, R. L., Schirm, A. L., & Lazar, N. A. (2019). Moving to a World Beyond “p < 0.05”. The American Statistician, 73(sup1), 1-19. https://doi.org/10.1080/00031305.2019.1583913

Filed Under: Evaluation Methodology Blog

Leadership Studies Program Holds 2024 Awards Ceremony Senior Toast

May 17, 2024 by Jonah Hall

The Leadership Studies program held its Senior Toast and Awards Ceremony last night, where we celebrated our forty-four 2023-24 graduates. Annually, our graduates lead a Capstone project as their culminating experience in the minor, with the most exceptional being awarded a medal. We selected Tyler Johnson’s project “Addressing the Mental Health of IFC” and Amara Pappas’ “Musical Theatre Rehearsal Project and Major” as this cohort’s Self-Directed and Faculty-Initiated Capstones of the Year. Exceptional Capstones by Elle Caldow, Kyle Stork, Margaret Priest, Devon Thompson, and Jane Carson Wade earned Honorable Mentions. Erin McKee earned the Leadership Studies Engaged Community Scholar Medal, and Grace Woodside the Zanoni Award for contribution to the Leadership Studies Academic Community. We also recognized Dr. Sean Basso as our Faculty Member of the Year and ELPS’ own Diamond Leonard as our Staff Member of the Year.

The highlight of the evening is the induction of graduates, faculty, and staff into the Tri-Star Society. The 2024 Class of the Tri-Star Society is: Brody Carmack, Mackenzie Galloway, Tyler Johnson, Erin McKee, Alay Mistry, McKaylee Mix, Amara Pappas, Devon Thompson, Mikele Vickers, Kendall William, and Grace Woodside. These leaders truly distinguished themselves as Leaders of Leaders with exceptional potential for continued leadership within our state, as demonstrated by their time at the University and in our community. Undergraduate Leadership Studies celebrates each of our graduates, all they have accomplished and will accomplish, and those in the UT community who contributed to their success.

Filed Under: News

Supporting Early Childhood Teacher Growth and Development with Actionable Context-Based Feedback

May 17, 2024 by Jonah Hall

By Dr. Mary Lynne Derrington & Dr. Alyson Lavigne

Please Note: This is the final part of a four-part series on actionable feedback, focusing this time on teacher feedback in early childhood education.

Missed the beginning of the series? Click here to read Part 1
on making teacher feedback count!

According to the Center on the Developing Child at Harvard University, in the first few years of life more than 1 million new neural connections form every second. With children experiencing rapid brain development from birth to age 8, early childhood education and the experiences in those settings are critical for building a foundation of lifelong learning and success. Thus, supporting the educators who teach these early learners is perhaps one of the best educational investments that any school leader can make. 

One way leaders support teachers is to observe classrooms and provide feedback. Maria Boeke Mongillo and Kristine Reed Woleck argue that those who observe and provide feedback to early childhood educators can leverage leadership content knowledge—knowledge about the principles of early childhood education—and apply that knowledge to the observation cycle and the context in which early childhood educators work.

To apply leadership content knowledge, school leaders should first be familiar with the National Association for the Education of Young Children (NAEYC) and their principles of child development and learning, most recently published in 2020. The list of nine principles includes the essential element of play for promoting joyful learning to foster self-regulation, language, cognitive and social competencies, and content knowledge across disciplines.

Once a school leader is familiar with NAEYC’s nine principles, they may consider applying early childhood-informed “look-fors” during an observation with related questions that can be used during a pre- or post-observation conference. Examples by area of teaching are provided below.

Learning Environment

Look-Fors

  • Flexible seating and work spaces allow for collaboration, social-skill development, and language development
  • Physical space and furniture allow for movement and motor breaks

Pre- or Post-Observation Questions

  • How do your classroom environment and materials support your student learning outcomes?

Instructional Practices

Look-Fors

  • Opportunities for learning are embedded in play and collaborative experiences

Pre- or Post-Observation Questions

  • How might your students consolidate and extend their learning in this lesson through play opportunities?

Assessment

Look-Fors

  • Use of observations and interviews to assess student learning.

Pre- or Post-Observation Questions

  • What methods are you using to collect information about student learning during this lesson?

These systematic structures can be applied to ensure that observation and, importantly, feedback to early childhood educators are meaningful and relevant.

This entry concludes our four-part series on actionable feedback.

If this blog has sparked your interest and you want to learn more, check out our book, Actionable Feedback to PK-12 Teachers. And for other suggestions on supervising teachers in early childhood, see Chapter 10 by Maria Boeke Mongillo and Kristine Reed Woleck.

Missed the beginning of the series? Click here to read the first, second and third blog posts. 

Filed Under: News

Reflecting on a Decade After ESM: My Continuing Journey as an Evaluation Practitioner and Scholar

May 15, 2024 by Jonah Hall

By Tiffany Tovey, Ph.D.

Greetings, fellow explorers of evaluation! I’m Tiffany Tovey, a fellow nerd, UTK alum, and practitioner on a constantly evolving professional and personal journey, navigating the waters with a compass called reflective practice. Today, I’m thrilled to reflect together with you on the twists and turns of my journey as an evaluation practitioner and scholar in the decade since I defended my dissertation and offer some insights for you to consider in your own work.

My Journey in Evaluation 

Beginning the unlearning process. The seeds of my journey into social science research were sown during my undergraduate years as a first-generation college student at UTK, where I pursued both philosophy and psychology for my bachelor’s degree. While learning about the great philosophical debates and thinkers, I was traditionally trained in experimental and social psychology under the mentorship of Dr. Michael Olson. This rigorous grounding in knowledge and inquiry gave me a perspective on what was to come. I learned the importance of asking questions, embracing fallibilism, and appreciating the depth of what I now call reflective practice. Little did I know, this foundation really set the stage for my immersion into the world of evaluation, starting with the Evaluation, Statistics, and Methodology (ESM) program at UTK.

Upon entering ESM for my Ph.D., I found myself in the messy, complex, and dynamic realm of applying theory to practice. Here, my classical training in positivist, certainty-oriented assumptions was immediately challenged (in ways I am still unlearning to this day), and my interests in human behavior and reflective inquiry found a new, more nuanced, context-oriented environment to thrive. Let me tell you about lessons I learned from three key people along the way: 

  • Communicating Data/Information: Statistics are tools for effectively communicating about and reflecting on what we know about what is and (sometimes) why it is the way it is. Dr. Jennifer Ann Morrow played a pivotal role in shaping my understanding of statistics and its application in evaluation. Her emphasis on making complex statistical information accessible and meaningful to students, clients, and other audiences has stuck with me.

    As important as statistics are, so too are words—people’s lived experiences, which is why qualitative research is SO important in our work, something that all my instructors helped to instill in me in ESM. I can’t help it; I’m a word nerd. Whether qualitative or quantitative, demystifying concepts, constructs, and contexts, outsmarting software and data analysis programs, and digesting and interpreting information in a way that our busy listeners can understand and make use of is a fundamental part of our jobs.
  • Considering Politics and Evaluation Use: Under the mentorship of Dr. Gary Skolits, a retired ESM faculty member and current adjunct faculty member at UTK, I began to understand the intricate dances evaluators navigate in the realms of politics and the use of evaluation findings. His real talk and guidance helped prepare me for the complexities of reflective practice in evaluation, which became the focus of my dissertation. Upon reflection, I see my dissertation work as a continuation of the reflective journey I began in my undergraduate studies, and my work with Gary as a fine-tuning and clarification of the critical role of self-awareness, collaboration, facilitation, and tact in the evaluation process.
  • The Key Ingredient – Collaborative Reflective Practice: My journey was deepened by my engagement with Dr. John Peters, another now-retired faculty member from UTK’s College of Education, Health, and Human Sciences, who introduced me to the value of collaborative reflective practice through dialogue and systematic reflective processes. His teachings seeded my belief that evaluators should facilitate reflective experiences for clients and collaborators, fostering deeper understandings, cocreated learning, and more meaningful outcomes (see the quote by John himself below… and think about the ongoing role of the evaluator during the lifecycle of a project). He illuminated the critical importance of connecting theory to practice through reflective practice—a transformative activity that occupies the liminal space between past actions and future possibilities. This approach encourages us to critically examine the complexities of practice, thereby directly challenging the uncritical acceptance of the status quo.

My post-PhD journey. I currently serve as the director of the Office of Assessment, Evaluation, and Research Services and teach program evaluation, qualitative methods, reflective practice, interpersonal skills, and just-in-time applied research skills to graduate and undergraduate students at UNC Greensboro. Here, I apply my theoretical knowledge to real-world evaluation projects, managing graduate students and leading them on their professional evaluation learning journey. Each project and collaboration has been an opportunity to apply and refine my understanding of reflective practice, effective communication, and the transformative power of evaluation.  

My role at UNCG has been a continued testament to the importance of reflective practice. The need for intentional reflective experiences runs throughout my work as director of OAERS, as lead evaluator and researcher on sponsored projects, in mentorship and scaffolding with students, and as a teacher. Building in structured time to think, unpack questions and decisions together, and learn how to go on more wisely is a ubiquitous need. Making space for reflective practice means leveraging the ongoing learning and unlearning process that defines the contours of (1) evaluation practice, (2) evaluation scholarship, and (3) let’s be honest… life itself!

Engaging with Others: The Heart of Evaluation Practice 

As evaluators, our work is inherently collaborative and human centered. We engage with diverse collaborators and audiences, each bringing their unique perspectives and experiences to the table. In this complex interplay of voices, it’s essential that we—evaluators—foster authentic encounters that lead to meaningful insights and outcomes.

In the spirit of Martin Buber’s philosophy, I try to approach my interactions with an open heart and mind, seeking to establish a genuine connection with those I work with. Buber reminds us that “in genuine dialogue, each of the participants really has in mind the other or others in their present and particular being and turns to them with the intention of establishing a living mutual relation between himself and them” (Buber, 1965, p. 22). This perspective is foundational to my practice, as it emphasizes the importance of mutual respect and understanding in creating a space for collaborative inquiry and growth. 

Furthermore, embracing a commitment to social justice is integral to my work as an evaluator. Paulo Freire’s insights resonate deeply with me:  

Dialogue cannot exist, however, in the absence of a profound love for the world and for people. The naming of the world, which is an act of creation and re-creation, is not possible if it is not infused with love. Love is at the same time the foundation of dialogue and dialogue itself. (Freire, 2000, p. 90) 

This principle guides me in approaching each evaluation project with a sense of empathy and a dedication to promoting equity and empowerment through my work. 

Advice for Emerging Evaluators 

  • Dive in and embrace the learning opportunities that come your way.  
  • Reflect on your experiences and be honest with yourself.  
  • Remember, evaluation is about people and contexts, not just techniques and tools.  
  • Leverage your unique personality and lived experience in your work. 
  • Never underestimate the power of effective, authentic communication… and networking. 
  • Most importantly, listen to and attend to others—we are a human-serving profession geared towards social betterment. Be in dialogue with your surroundings and those you are in collaboration with. View evaluation as a reflective practice, and your role as a facilitator of that process. Consider how you can leverage the perspectives of Buber and Freire in your own practice to foster authentic encounters and center social justice in your work. 

Conclusion and Invitation 

My journey as an evaluation scholar is one of continuous learning, reflection, and growth. As I look to the future, I see evaluation as a critical tool for navigating the complex challenges of our world, grounded in reflective practice and a commitment to the public good. To my fellow evaluators, both seasoned and emerging, let’s embrace the challenges and opportunities ahead with open minds and reflective hearts. And to the ESM family at UTK, know that I am just an email away (tlsmi32@uncg.edu), always eager to connect, share insights, and reflect further with you.

Filed Under: Evaluation Methodology Blog

How Do I Critically Consume Quantitative Research?

May 1, 2024 by Jonah Hall

By Austin Boyd 

Every measurement, evaluation, statistics, and assessment (MESA) professional, whether an established educator or practitioner or an aspiring student, engages with academic literature in some capacity. Sometimes for work, other times for pleasure, but always in the pursuit of new knowledge. But how do we as consumers of research determine whether the quantitative research we engage with is high quality?

My name is Austin Boyd, and I am a researcher, instructor, and ESM alumnus. I have read my fair share of articles over the past decade and was fortunate enough to publish a few of my own. I have read articles in the natural, formal, applied, and social sciences, and while they all shared the title of peer-reviewed publication, there was definitely variability in the quality of the quantitative research from one article to the next. Initially, it was difficult for me to even consider the idea that a peer-reviewed publication would be anything less than perfect. However, as I have grown as a critical consumer of research, I have devised six questions to keep in mind when reading articles with quantitative analyses that allow me to remain objective in the face of exciting results.

  1. What is the purpose of the article?

The first question to keep in mind when reading an article is, “What is its purpose?” Articles may state this in the form of research questions or even in the title, using words such as “empirical”, “validation”, and “meta-analysis”. While the purpose of an article has no bearing on its quality, it does impact the type of information a reader should expect to obtain from it. Do the research questions indicate that the article will present exploratory research on a new phenomenon or attempt to validate previous research findings? Remaining aware of the article’s purpose allows you to determine whether the information is relevant and within the scope of what it should be providing.

  2. What information is provided about obtaining participants and about the participants themselves?

The backbone of quantitative research is data. In order to have any data, participants or cases must be found and measured for the phenomena of interest. These participants are all unique, and it is this uniqueness that needs to be disclosed to the reader. Information on the population of interest, how the selected participants were recruited, who they are, and why their results were or were not included in the analysis is essential for understanding the context of the research. Beyond the article itself, the demographics of the participants are also important for planning future research. While research participants are drawn largely from Western, educated, industrialized, rich, and democratic (WEIRD) societies (Henrich et al., 2010), it should not be assumed that this is the case for all research. The author(s) of an article should disclose demographic information about the participants so the readers understand the context of the data and the generalizability of the results, and so that researchers can accurately replicate or expand the research to new contexts.

  3. Do the analyses used make sense for the data and proposed research question(s)?

In order to obtain results from the quantitative data collected, some form of analysis must be conducted. The most basic methods of exploring quantitative data are called statistics (Sheard, 2018). The selected statistical analysis should align with the variables presented in the article and answer the research question(s) guiding the project. There is a level of subjectivity as to which statistical analysis should be used, but there are also objectively wrong choices. Variables measured on a nominal scale should not be used as the outcome variable in analyses that compare group means, such as t-tests and ANOVAs, while ratio-scale variables should not be used in analyses dealing with frequency distributions, such as chi-square tests. Meanwhile, some analyses require the same variable types, making them seemingly interchangeable. For example, t-tests, logistic regressions, and point-biserial correlations all use two variables, one continuous and one binary; however, each addresses a different research question, such as “Is there a difference between groups?”, “Can we predict an outcome?”, and “Is there a relationship between variables?”. In short, while there is some subjectivity in choosing an analysis, there are objectively incorrect analyses given the overarching research questions and the scale of measurement of the available variables.
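
To make that concrete, here is a minimal sketch (Python with scipy and statsmodels; the data are simulated and the variable names invented) of three defensible analyses of the same binary-plus-continuous pair, each answering a different question:

```python
# Three analyses of one binary variable and one continuous variable,
# each matched to a different research question (illustrative sketch).
import numpy as np
import statsmodels.api as sm
from scipy.stats import pointbiserialr, ttest_ind

rng = np.random.default_rng(7)
group = rng.integers(0, 2, size=200)              # binary group indicator
score = 50 + 5 * group + rng.normal(0, 10, 200)   # continuous outcome

# "Is there a difference between groups?" -> independent-samples t-test
t_res = ttest_ind(score[group == 0], score[group == 1])

# "Is there a relationship between the variables?" -> point-biserial r
r, p_r = pointbiserialr(group, score)

# "Can we predict an outcome?" -> logistic regression (here, predicting
# group membership from the continuous score)
logit = sm.Logit(group, sm.add_constant(score)).fit(disp=0)

print(f"t-test:         t = {t_res.statistic:.2f}, p = {t_res.pvalue:.4f}")
print(f"point-biserial: r = {r:.2f}, p = {p_r:.4f}")
print(f"logistic slope: {logit.params[1]:.3f}")
```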

  4. What results are provided?

While a seemingly straightforward question, there is a lot of information that can be provided about a given analysis. The most basic, and least informative, is a blanket statement about statistical significance. Whether or not there is a statistically significant result to report, a blanket statement is not sufficient, given all the different values that can be reported for each analysis. For example, a t-test has a t value, degrees of freedom, p value, confidence interval, power level, and effect size, all of which provide valuable information about the results. While having some of these values does allow the reader to calculate the missing ones, the onus should not be put on the reader to do so (Cohen, 1990). Additionally, depending on the type of statistical analysis chosen, additional tests must be conducted to determine whether the data meet the assumptions necessary for the analysis. The results of these tests of assumptions, and the decisions made based on them, should be reported and supported by the existing literature.
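
As a hedged illustration of fuller reporting (Python with numpy and scipy; the group data are simulated, and a power calculation is omitted since it depends on design assumptions), one might compute and report several of these values at once:

```python
# Reporting more than "p < .05": t, df, p, and Cohen's d for two groups.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
a = rng.normal(100, 15, size=40)   # simulated group A scores
b = rng.normal(92, 15, size=40)    # simulated group B scores

res = ttest_ind(a, b)              # Student's (equal-variance) t-test
df = len(a) + len(b) - 2

# Cohen's d from the pooled standard deviation
pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                     (len(b) - 1) * b.var(ddof=1)) / df)
d = (a.mean() - b.mean()) / pooled_sd

print(f"t({df}) = {res.statistic:.2f}, p = {res.pvalue:.4f}, d = {d:.2f}")
```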

  5. Is there any discussion of limitations?

Almost every article has limitations of some kind, and these should be made known to the reader. Indeed, if an article truly had no limitations, the author would make a point to say so. Limitations include limits to the generalizability of the findings, confounding variables, or simply time constraints. While these might seem negative, they are not immediate reasons to discredit an article entirely. As was the case for the demographics, the limitations provide further context about the research. They can even be useful in providing direction for follow-up studies, in the same way a future research section would.

  6. Do you find yourself still having questions after finishing the article?

The final question to keep in mind once you have finished reading an article is “Do you still have questions?” At the end of an article, you shouldn’t find yourself needing more information about the study. You might want to know more about the topic or similar research, but you shouldn’t be left wondering about pieces of the research design or other methodological aspects of the study. High-quality research deserves an equally high-quality article, which includes ample information about every aspect of the study. 

While not an exhaustive list, these six questions are designed to provide a starting point for determining if research with quantitative data is of high quality. Not all research is peer-reviewed, including conference presentations, blog posts, and white papers, and simply being peer-reviewed does not make a publication infallible. It is important to understand how to critically consume research in order to successfully navigate the ever-expanding body of scientific research. 

Additional Resources:

https://blogs.lse.ac.uk/impactofsocialsciences/2016/05/09/how-to-read-and-understand-a-scientific-paper-a-guide-for-non-scientists/  

https://statmodeling.stat.columbia.edu/2021/06/16/wow-just-wow-if-you-think-psychological-science-as-bad-in-the-2010-2015-era-you-cant-imagine-how-bad-it-was-back-in-1999/ 

https://totalinternalreflectionblog.com/2018/05/21/check-the-technique-a-short-guide-to-critical-reading-of-scientific-papers/ 

https://undsci.berkeley.edu/understanding-science-101/how-science-works/scrutinizing-science-peer-review/ 

https://www.linkedin.com/pulse/critical-consumers-scientific-literature-researchers-patients-savitz/ 

References:

Cohen, J. (1990). Things I have learned (So Far). The American Psychologist, 45(12), 1304–1312. DOI: 10.1037/0003-066X.45.12.1304 

Henrich, J., Heine, S., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61-83. DOI: 10.1017/S0140525X0999152X

Sheard, J. (2018). Chapter 18 – Quantitative data analysis. In K. Williamson & G. Johanson (Eds.), Research Methods (2nd ed., pp. 429-452). Chandos Publishing. DOI: 10.1016/B978-0-08-102220-7.00018-2   

Filed Under: Evaluation Methodology Blog

Engaging Students in Online, Asynchronous Courses: Strategies for Success

April 15, 2024 by Jonah Hall

By S. Nicole Jones, Ph.D. 

Hello! My name is Nicole Jones, and I am a 2022 graduate of the Evaluation, Statistics, and Methodology (ESM) PhD program at the University of Tennessee, Knoxville (UTK). I currently work as the Assessment & Accreditation Coordinator in the College of Veterinary Medicine (CVM) at the University of Georgia (UGA). I also teach online, asynchronous program evaluation classes for UTK’s Evaluation, Statistics, & Methodology PhD and Evaluation Methodology MS programs. My research interests include the use of artificial intelligence in evaluation and assessment, competency-based assessment, and outcomes assessment. 

Prior to teaching part-time for UTK, I served as a graduate teaching assistant in two online, synchronous ESM classes while enrolled in the PhD program: Educational Research and Survey Research. In addition, I taught in-person first-year seminars to undergraduates for many years in my previous academic advising roles. However, it wasn’t until I became involved in a teaching certificate program offered by UGA’s CVM this year that I truly began to reflect more on my own teaching style, and explore ways to better engage students, especially in an online, asynchronous environment. For those who are new to teaching online classes or just need some new ideas, I thought it would be helpful to share what I’ve learned about engaging students online.  

Online Learning 

While many online courses meet synchronously, meaning they meet virtually at a scheduled time through platforms like Zoom or other Learning Management System (LMS) tools, there are also online classes that have no scheduled meeting times or live interactions. These classes are considered asynchronous. If you have taken an online, asynchronous course, you likely already know that it can be easy to forget about the class, primarily because there is no scheduled class time to keep you on track. When I worked as an academic advisor, I would often encourage my students who registered for these types of courses to set aside certain days or times of the week to devote to those classes. Many college students struggle with time management, especially in the first year, so this was one way to help them stay engaged in the class and up to date with assignments.

While it is certainly important for students to show up (or log in) and participate, it’s even more important for instructors to create an online environment that motivates students to do so. As discussed by Conrad and Donaldson (2012), online engagement is related to student participation and interaction in the classroom, and learning in the classroom (online or in-person) rests upon the instructor’s ability to create a sense of presence and engage students in the learning process. The key to engaging online learners is for students to be engaged and supported so they take responsibility for their own learning (Conrad & Donaldson, 2012). So, how might you create an engaging online environment for students?

Engaging Students in Online Classes 

Below are some strategies I currently use to engage students in my online, asynchronous program evaluation classes:  

  • Reach out to the students prior to the start of class via welcome email 
  • Post information about myself via an introduction post – also have students introduce themselves via discussion posts 
  • Develop a communication plan – let students know the best way to get in touch with me 
  • Host weekly virtual office hours – poll students about their availability to find the best time 
  • Clearly organize the course content by weekly modules 
  • Create a weekly checklist and/or introduction to each module 
  • Use the course announcements feature to send out reminders of assignment due dates  
  • Connect course content to campus activities, workshops, events, etc.  
  • Utilize team-based projects 
  • Provide opportunities for students to reflect on learning (i.e., weekly reflection journals) 
  • Provide feedback on assignments in a timely manner 
  • Allow for flexibility and leniency  
  • Reach out to students who miss assignment due dates – offer to meet one-on-one if needed 

In addition to these strategies, the Center for Teaching and Learning at Northern Illinois University has an excellent website with even more recommendations for increasing student engagement in online courses. Their recommendations focus on the following areas: 1) set expectations and model engagement, 2) build engagement and motivation with course content and activities, 3) initiate interaction and create faculty presence, 4) foster interaction between students and create a learning community, and 5) create an inclusive environment. I also recommend checking your current institution’s Center for Teaching and Learning to see if they have tips or suggestions as they may be more specific for the LMS your institution uses. Lastly, you may find the following resources helpful if you wish to learn more about student engagement and online teaching and learning. 

Helpful Resources 

American Journal of Distance Education: https://www.tandfonline.com/toc/hajd20/current  

Fostering Connection in Hybrid & Online Formats:
https://www.ctl.uga.edu/_resources/documents/Fostering-Connection-in-Hybrid-Online-Formats.pdf  

Conrad, R. M., & Donaldson, J. A. (2012). Continuing to Engage the online Learner: More Activities and Resources for Creative Instruction. San Francisco, CA: Jossey Bass.  

Groccia, J. E. (2018). What is student engagement? New Directions for Teaching and Learning, 154, 11-20.  

How to Make Your Teaching More Engaging: Advice Guide 

https://www.chronicle.com/article/how-to-make-your-teaching-more-engaging/?utm_source=Iterable&utm_medium=email&utm_campaign=campaign_3030574_nl_Academe-Today_date_20211015&cid=at&source=ams&sourceid=&cid2=gen_login_refresh 

 How to Make Your Teaching More Inclusive:  

https://www.chronicle.com/article/how-to-make-your-teaching-more-inclusive/ 

Iowa State University Center for Excellence in Learning and Teaching: https://www.celt.iastate.edu/learning-technologies/engaging-students/ 

Khan, A., Egbue, O., Palkie, B., & Madden, J. (2017). Active learning: Engaging students to maximize learning in an online course. The Electronic Journal of e-Learning, 15(2), 107-115. 

Lumpkin, A. (2021). Online teaching: Pedagogical practices for engaging students synchronously and asynchronously. College Student Journal, 55(2), 195-207. 

Northern Illinois University Center for Teaching and Learning. (2024, March 1). Recommendations to Increase Student Engagement in Online Classes. https://www.niu.edu/citl/resources/guides/increase-student-engagement-in-online-courses.shtml.   

Online Learning Consortium: https://onlinelearningconsortium.org/read/olc-online-learning-journal/  

Watson, S., Sullivan, D. P., & Watson, K. (2023). Teaching presence in asynchronous online classes: It’s not just a façade. Online Learning, 27(2), 288-303. 

Filed Under: Evaluation Methodology Blog

Careers in Program Evaluation: Finding and Applying for a Job as a Program Evaluator

April 1, 2024 by Jonah Hall

By Jennifer Ann Morrow, Ph.D. 

Introduction: 

Hi! My name is Jennifer Ann Morrow and I’m an Associate Professor in Evaluation, Statistics, and Methodology at the University of Tennessee, Knoxville. I have been training emerging assessment and evaluation professionals for the past 22 years. My main research areas are training emerging assessment and evaluation professionals, higher education assessment and evaluation, and college student development. My favorite classes to teach are survey research, educational assessment, program evaluation, and statistics.

What’s Out There for Program Evaluators? 

What kinds of jobs are out there for program evaluators? What organizations hire program evaluators? Where should I start my job search? What should I submit with my job application? These are typical questions my students ask me as they consider joining the evaluation job market. Searching for a job can be overwhelming; with so many resources and websites available, it is easy to get lost in all of the information. Here are some strategies that I share with my students as I help them navigate the program evaluation job market. I hope you find them helpful!

First, I ask students to describe the skills and competencies they have and which they believe are their strengths (and hopefully enjoy using!). In our program we use the American Evaluation Association Competencies (https://www.eval.org/About/Competencies-Standards/AEA-Evaluator-Competencies) in a self-assessment where we have students rate how confident they are in their ability to perform each competency. They rate themselves, and we provide strategies for remedying deficiencies, each year that they are in our program. Conducting a self-assessment of your skills, competencies, strengths, and weaknesses is a great way to figure out what types of jobs best fit your skillset. It is also helpful when crafting a cover letter! Check out the resources below for additional examples of self-assessments!

Second, I have students create or update their curriculum vitae (CV) and resume. Depending on the jobs they plan to apply for, they may need a CV or a resume. I tell them to use the information from their skills self-assessment and their graduate program of study to craft their CV/resume. I also have them develop a general cover letter (these should be tailored for each specific job) that showcases their experience, skills, and relevant work products. There are a ton of resources available online (see some listed below), and I share with them example CVs/resumes and cover letters from some of our graduates. I also encourage them to get feedback on these from faculty and peers before using them in a job application.

Third, I encourage students to develop a social media presence (or clean up their current one). I highly recommend creating a LinkedIn profile (My LinkedIn Profile). Make sure your profile showcases your skills, education, and experiences, and make connections with others in the program evaluation field. LinkedIn is also a great place to search for evaluation jobs! I also recommend that students create an academic website (Dr. Rocconi’s Website). On your website you can go into more detail about your experiences and share work products (e.g., publications, presentations, evaluation reports). Make sure you put your LinkedIn and website links at the top of your CV/resume!

Fourth, I provide my students tips for where and how to search for program evaluation jobs. I encourage them to draft relevant search terms (e.g., program evaluator, evaluation specialist, program analyst, data analyst) and make a list of job sites (see the resources for some of my favorites!) to use in the search. On many of these job sites you can filter by key terms, job title, location, salary, etc. to narrow down the results, and you can often sign up for job alerts so that you receive an email when a new posting fits your search terms. I also encourage students to join their major professional organizations (e.g., AEA) and sign up for their newsletters or listservs, as many job opportunities are posted there.

Lastly, I tell students to create an organized job search plan. I typically do this in Excel, but you can organize your information in a variety of formats and platforms. I create a file that contains all of the jobs I apply for (i.e., name of organization, link to job ad, contact information, date applied) and a list of when and where I am searching for jobs. When I was actively searching, I dedicated time each week to go through listserv emails and search job sites for relevant openings, and I updated my file each week. It helps to keep things organized in case you need to follow up with organizations about the status of your application.
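
If you would rather script this tracker than maintain it by hand, here is a minimal sketch in Python with pandas (the columns, entries, and file name are purely illustrative, not a prescribed format):

```python
# A tiny job-search tracker: one row per application, saved as a CSV that
# Excel or Google Sheets can open (illustrative sketch).
import pandas as pd

applications = pd.DataFrame([
    {"organization": "Example Eval Associates",   # hypothetical entry
     "job_ad": "https://example.org/jobs/123",
     "contact": "recruiting@example.org",
     "date_applied": "2024-04-01",
     "status": "applied"},
])

applications.to_csv("job_search_tracker.csv", index=False)
print(applications.to_string(index=False))
```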

So, good luck on your job search and I hope that my tips and resources are helpful as you start your journey to becoming a program evaluator! 

 

Resources 

American Evaluation Association Competencies: https://www.eval.org/About/Competencies-Standards/AEA-Evaluator-Competencies  

Article about How to Become a Program Evaluator: https://www.evalcommunity.com/careers/program-evaluator/ 

Article about Program Evaluation Careers: https://money.usnews.com/money/careers/articles/2008/12/11/best-kept-secret-career-program-evaluator 

Article about Program Evaluation Jobs: https://www.evalcommunity.com/job-search/program-evaluation-jobs/ 

Creating a LinkedIn Profile: https://blog.hubspot.com/marketing/linkedin-profile-perfection-cheat-sheet  

Creating an Academic Website: https://theacademicdesigner.com/2023/how-to-make-an-academic-website/  

Evaluator Competencies Assessment: https://www.khulisa.com/wp-content/uploads/sites/9/2021/02/2020-Evaluator-Competencies-Assessment-Tool-ECAT_Final_2020.07.27.pdf  

Evaluator Qualities: https://www.betterevaluation.org/frameworks-guides/managers-guide-evaluation/scope-evaluation/determine-evaluator-qualities 

Evaluator Self-Assessment: https://www.cdc.gov/evaluation/tools/self_assessment/evaluatorselfassessment.pdf  

Program Evaluation Curriculum Vita Tips: https://wmich.edu/sites/default/files/attachments/u1158/2021/Showcasing%20Your%20Eval%20Competencies%20in%20Your%20Resume%20or%20Vita%20for%20PDF.pdf  

Program Evaluation Resume Tips: https://www.zippia.com/program-evaluator-jobs/skills/#  

Resume and CVs Resources: https://www.careereducation.columbia.edu/topics/resumes-cvs  

Resume and Job Application Resources: https://academicguides.waldenu.edu/careerservicescenter/resumesandmore  

Six C’s of a Good Evaluator: https://www.evalacademy.com/articles/2019/9/26/what-makes-a-good-evaluator  

UTK’s Evaluation Methodology MS program (distance ed): https://volsonline.utk.edu/programs-degrees/education-evaluation-methodology-ms/ 

AAPOR Jobs: https://jobs.aapor.org/jobs/?append=1&quick=industry%7Csurvey&jas=3 

American Evaluation Association Job Bank: https://careers.eval.org/ 

Evaluation Jobs: https://evaluationjobs.org/ 

Higher Ed Jobs: https://www.higheredjobs.com/ 

Indeed.com: https://www.indeed.com/ 

Monitoring and Evaluation Career Website: https://www.evalcommunity.com/ 

NCME Career Center: https://www.ncme.org/community/community-network2/careercenter 

USA Government Job Website: https://www.usajobs.gov/ 

 

Filed Under: Evaluation Methodology Blog

Brian Mells Recognized as Field Award Recipient

March 25, 2024 by Jonah Hall

Dr. Brian Mells, principal of Whites Creek High School in Metro Nashville Public Schools, has been named the recipient of the William J. and Lucille H. Field Award for Excellence in Secondary Principalship for the State of Tennessee.

The Field Award was established to recognize one outstanding secondary school leader each year who demonstrates leadership excellence through commitment to the values of civility, candor, courage, social justice, responsibility, compassion, community, persistence, service, and excellence. Administered by the College of Education, Health, and Human Sciences at the University of Tennessee, the award identifies a Tennessee secondary school principal whose life and work are characterized by leadership excellence. It also encourages secondary school principals to pause and reflect on their current leadership practice and to consider their experience, challenges, and opportunities in light of the personal values they embody. 


This year's Field Award recipient is Dr. Brian Mells, principal of Whites Creek High School in Metro Nashville Public Schools. A secondary principal since 2016, Dr. Mells holds a bachelor's degree from the University of Tennessee, a master's from Trevecca Nazarene University, and an EdS and an EdD from Carson-Newman University. During Dr. Mells' tenure at Whites Creek High School, he has led his campus to excellence by supporting academic rigor and student achievement and by strengthening positive relationships with all stakeholders. Dr. Mells is an exceptional school leader who has taken the initiative to implement numerous programs on his campus, inspire instructional innovation, and improve student achievement. Dr. Mells stated that his "core belief of [his] leadership is that all students can achieve and grow academically, socially, and emotionally, when the appropriate systems and structures are in place for them to be successful." 

Under Dr. Mells' leadership, Whites Creek High School improved academic achievement outcomes for all students and earned an overall composite TVAAS rating of Level 5 for the first time in school history, a status it has maintained for the past two years. Dr. Mells was nominated for the Field Award by MNPS Superintendent Adrienne Battle and endorsed by Chief of Innovation Renita Perry. Perry commented, "Dr. Mells is an innovative school leader who is passionate about developing collective efficacy and collective accountability among his faculty and staff to ensure that they achieve excellence for all stakeholders." The Department of Educational Leadership and Policy Studies at the University of Tennessee is proud to name Dr. Mells as this year's Field Award winner. Congratulations, Dr. Brian Mells! 

Filed Under: News

How My Dissertation Came to be through ESM's Support and Guidance

March 15, 2024 by Jonah Hall

By Meredith Massey, Ph.D. 

Who I am

Greetings! I'm Dr. Meredith Massey. I finished my PhD in Evaluation, Statistics, and Methodology (ESM) at UTK in the fall of 2023. In addition to my PhD in ESM, I also completed graduate certificates in Women, Gender, and Sexuality and in Qualitative Research Methods in Education. While I was a part-time graduate student, I also worked full-time as an evaluation associate at Synergy Evaluation Institute, a university-based evaluation center. By day, I worked for clients evaluating their STEM education and outreach programs. By night, I was an emerging scholar in ESM. During my time in the program, my research interests grew to include andragogical issues in applied research methods courses, classroom measurement and assessment, feminist research methods, and evaluation.

How my dissertation came to be

In the ESM program, students can choose to complete a three-manuscript dissertation rather than a traditional five-chapter dissertation. When it came time to decide what my dissertation would look like, my faculty advisor, Dr. Leia Cain, suggested I consider the three-manuscript option. As someone with varied interests, I was drawn to this idea because it gave me the flexibility to work on three separate but related studies. My dissertation flowed from a research internship that I completed with Dr. Cain, in which I interviewed qualitative faculty about their assessment beliefs and practices within their qualitative methods courses. I wrote a journal article on that study to serve as my comprehensive exam writing requirement. Using the internship study as the basis for my first dissertation manuscript was an expedient strategy, as it allowed me to structure my second and third manuscripts around the findings of the first. I presented my ideas for the second and third manuscripts to my committee in my dissertation proposal, accepted their feedback on how to proceed, and got to work.

Dissertation topic and results

In my multi-paper dissertation, entitled "Interviews, rubrics and stories (Oh my!): My journey through a three-manuscript dissertation," I chose to center faculty and students' perspectives on assessment and learning. The first and second studies focused on those two issues, while the third explored the student perspective through the story of the parallel formations of my scholarly identity and my new identity as part of a married couple.

In the first study, "Interviewing the Interviewers: How qualitative faculty assess interviews," I reported how faculty use interview assignments in their courses and how they engage with assessment tools such as rubrics for those assignments. We learned that faculty view interview assignments as the best and most comprehensive assignment for giving students experience as qualitative researchers. While instructors had differing opinions on whether rubrics were an appropriate assessment tool, all of them believed that giving students feedback was an essential assessment practice. The findings of that manuscript helped shape the plan for the second study.

In "I can do qualitative research: Using student-designed rubrics to teach interviewing," I tested an innovative student-created rubric for an interview assignment in an introductory qualitative research methods course and used student reflections as the basis for an ethnodrama about how students experience their first interview assignments and how they engaged with their rubric. From this study, we learned that students grew in their confidence in conducting interviews, experienced a transformation in their paradigm, and were conflicted about the student-designed rubric: some found it useful, and some did not.

Both manuscripts informed my third, an autoethnography detailing the parallel transitions in my identity from evaluator to scholar and from single person to married person. I wrote interweaving stories chronicling the similarities and contrasts between the methods I use as an evaluator and as a researcher, how these tied into my growing identity as a scholar, and how I noticed my identity changing throughout my engagement and early marriage to my longtime boyfriend, now husband. These studies contributed valuable knowledge to the existing, though limited, andragogical literature on qualitative research methods. My hope going forward is that qualitative faculty continue this focus, opening conversations about their classroom assessments and completing their own andragogical studies of the impact of their teaching on their students' learning.

What’s next?

Now that I'm finished with my dissertation and my studies, I am happy to report that I have accepted a promotion at Synergy Evaluation Institute, and I've also been given the opportunity to teach qualitative research methods courses as an adjunct in the ESM program. I'm excited to stay connected to the program and teach future ESM students. Being in the ESM program at UTK, while difficult at times, has also been a joy. The program encouraged me to explore my varied interests and ultimately supported me as I grew professionally as an evaluator and scholar. It accommodated and respected me as a working professional, and I highly recommend it to any student interested in working with data as an evaluator, assessment professional, statistician, qualitative researcher, faculty member, or all of the above. There's a place for all in ESM.

Resources

Journal article citation

Massey, M. C., & Cain, L. K. (in press). Interviewing the interviewers: How qualitative faculty assess interviews. The Qualitative Report.

Books specifically about Qualitative Research Methods Andragogy

Eisenhart, M., & Jurow, A. S. (2011). Teaching qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The SAGE handbook of qualitative research (4th ed., pp. 699-714). Sage.

Hurworth, R. E. (2008). Teaching qualitative research: Cases and issues. Sense Publishers.

Swaminathan, R., & Mulvihill, T. M. (2018). Teaching qualitative research: Strategies for engaging emerging scholars. Guilford Publications.

Books to read to become familiar with Ethnodrama as a method

Leavy, P. (2015). Handbook of Arts-Based Research (2nd ed.). The Guilford Press.

Leavy, P. (2018). Handbook of Arts-Based Research (3rd ed.). The Guilford Press.

Saldaña, J. (2016). Ethnotheatre: Research from page to stage. Routledge. https://doi.org/10.4324/9781315428932

Most useful citations to become familiar with autoethnography as a method

Cooper, R., & Lilyea, B. V. (2022). I'm interested in autoethnography, but how do I do it? The Qualitative Report, 27(1), 197-208. https://doi.org/10.46743/2160-3715/2022.5288

Ellis, C. (2004). The ethnographic I: A methodological novel about autoethnography. AltaMira.

Ellis, C. (2013). Carrying the torch for autoethnography. In S. H. Jones, T. E. Adams, & C. Ellis (Eds.), Handbook of autoethnography (pp. 9-12). Left Coast Press.

Filed Under: Evaluation Methodology Blog

