Educational Leadership and Policy Studies

Archives for September 2024

Cuevas (CSP Faculty) Named a NASPA Pillar of the Profession

September 26, 2024 by Jonah Hall

By Beth Hall Davis – September 19, 2024

Courtesy of the University of Tennessee, Knoxville – Student Life

Frank Cuevas, vice chancellor for Student Life at UT, has been named one of NASPA's 2025 Pillars of the Profession. This award, one of the NASPA Foundation's highest honors, recognizes exceptional members of the student affairs and higher education community for their work and contributions to the field.

NASPA's award honors individuals who have created a lasting impact at their institutions, leaving a legacy of extraordinary service, and who have demonstrated sustained professional distinction in the field of student affairs and/or higher education.

Cuevas has been with the university since 2010 and has held several different roles in that time. As vice chancellor, Cuevas and his leadership team are responsible for student care and support, health and wellness initiatives, and leadership and engagement opportunities. He oversees more than 450 staff members and 3.7 million square feet of facility space that includes the Student Union and on-campus housing.

The new class of pillars will be officially presented and honored during the 2025 NASPA annual conference in New Orleans.

Filed Under: News

The Parasocial Relationships in Social Media Survey: What is it and why did I create it?

September 15, 2024 by Jonah Hall

Author: Austin Boyd, Ph.D.

I love watching YouTube. When I began college, I gave up on cable and started watching YouTube in my free time instead. There was just something about watching streamers and influencers that was more compelling and comforting to me. I spent a lot of time wondering why I enjoyed it more, and it wasn't until graduate school that I learned the answer: parasocial relationships.

Parasocial Relationships and Their Measures

Coined by Horton and Wohl (1956), parasocial relationships are a type of relationship experienced between a spectator and a performer's persona. Because of the one-sided nature of the interaction, these relationships cannot be reciprocated by the performer with whom they are formed. At the time, television was the most effective medium through which parasocial relationships could develop (Horton & Strauss, 1957; Horton & Wohl, 1956). However, as time and technology have progressed, the media through which parasocial relationships can occur have expanded beyond television to include radio, literature, sports, politics, and social media, such as Facebook, Twitter, and, of course, YouTube.

Over the past 65+ years, hundreds of articles have been published with different scales created to measure parasocial phenomena in a variety of contexts. While many different scales exist, they are not interchangeable across contexts, and none had been validated to measure parasocial relationships in a social media context. Many of the scales were developed for specific media contexts and, because of this, do not lend themselves well to assessing parasocial phenomena in other situations and other forms of media without modification. Using a measure that has not been validated and is unsuitable for a population may compromise the results, even if the measure was found to be valid in a different context (Stewart et al., 2012). Furthermore, research (e.g., Dibble et al., 2016; Schramm & Hartmann, 2008) has begun to question the validity of these instruments; Dibble et al. (2016) assert that most parasocial interaction scales have not undergone adequate tests of validation.

The Parasocial Relationships in Social Media (PRISM) Survey

For my dissertation, I developed and began validating the scores of the Parasocial Relationships in Social Media (PRISM) Survey to measure the parasocial relationships that people develop with influencers and other online celebrities through social media (Boyd et al., 2022; Boyd et al., 2024). The survey contains 22 items based on three well-established parasocial surveys: the Audience-Persona Interaction Scale (Auter & Palmgreen, 2000), the Parasocial Interaction Scale (Rubin et al., 1985), and the Celebrity-Persona Parasocial Interaction Scale (Bocarnea & Brown, 2007). Participants are asked to indicate their level of agreement with each item using a five-point Likert scale.

The 22 items comprise four constructs: Interest In, Knowledge Of, Identification With, and Interaction With. The first factor, Interest In, contains seven items covering the level of concern for, perceived attractiveness of, and devotion to the celebrity and the content they create. The second factor, Knowledge Of, contains five items that deal specifically with the participants' knowledge of the celebrity and desire to learn more about them. While similar to the Interest In construct, these items address the participants' curiosity about and fascination with the celebrity, rather than their attachment to them. The third factor, Identification With, includes six items addressing the perceived similarities, such as shared qualities and opinions, between the celebrity and the participant. Finally, the fourth factor, Interaction With, contains four items covering the social aspects of viewing the celebrity, including the participants' feelings of social and friendship connections with them.
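As a scoring sketch only: the item-to-construct mapping below is purely illustrative (the published key appears in Boyd et al., 2022), but it shows how subscale scores for the four constructs could be computed from Likert responses in R.

```r
# Illustrative PRISM-style scoring sketch. The item groupings below are
# hypothetical placeholders, NOT the published PRISM item key.
# Responses are 1-5 Likert agreement ratings; here, two respondents who
# answered "3" to every item.
responses <- data.frame(matrix(3, nrow = 2, ncol = 22))
names(responses) <- paste0("item", 1:22)

# Assumed groupings matching the construct sizes in the post: 7 + 5 + 6 + 4
key <- list(
  interest_in         = paste0("item", 1:7),
  knowledge_of        = paste0("item", 8:12),
  identification_with = paste0("item", 13:18),
  interaction_with    = paste0("item", 19:22)
)

# One mean score per construct per respondent (2 x 4 matrix)
scores <- sapply(key, function(items) rowMeans(responses[items]))
scores
```

With every response fixed at 3, each construct mean is 3 for both respondents; with real data, rows would differ by respondent.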

After creating the survey, we conducted a psychometric evaluation of the scale. This included assessing content, face, construct, convergent, and discriminant validity, as well as internal consistency reliability and measurement invariance across different social media platforms. For full explanations of the methods and results used to validate the survey, see Boyd et al. (2022) and Boyd et al. (2024). We have also created an FAQ for those interested in using the survey, which can be found at https://austintboyd.github.io/prismsurvey/. Both articles and the PRISM survey have been published in open-access journals to allow easy access for any researcher interested in conducting parasocial relationship research in the social media landscape.

References 

Boyd, A. T., Morrow, J. A., & Rocconi, L. M. (2022). Development and validation of the Parasocial Relationships in Social Media Survey. The Journal of Social Media in Society, 11(2), 192–208.

Boyd, A. T., Rocconi, L. M., & Morrow, J. A. (2024). Construct validation and measurement invariance of the Parasocial Relationships in Social Media Survey. PLoS ONE, 19(3).

Dibble, J. L., Hartmann, T., & Rosaen, S. F. (2016). Parasocial interaction and parasocial relationship: Conceptual clarification and a critical assessment of measures. Human Communication Research, 42(1), 21–44. https://doi.org/10.1111/hcre.12063  

Horton, D., & Strauss, A. (1957). Interaction in audience-participation shows. American Journal of Sociology, 62(6), 579–587. https://doi.org/10.1086/222106

Horton, D., & Wohl, R. R. (1956). Mass communication and para-social interaction. Psychiatry, 19(3), 215–229. https://doi.org/10.1080/00332747.1956.11023049

Schramm, H., & Hartmann, T. (2008). The psi-process scales. A new measure to assess the intensity and breadth of parasocial processes. Communications, 33(4). https://doi.org/10.1515/comm.2008.025  

Stewart, A. L., Thrasher, A. D., Goldberg, J., & Shea, J. A. (2012). A framework for understanding modifications to measures for diverse populations. Journal of Aging and Health, 24(6), 992–1017. https://doi.org/10.1177/0898264312440321  

Filed Under: Evaluation Methodology Blog

mlmhelpr: An R helper package for estimating multilevel models

September 1, 2024 by Jonah Hall

Author: Louis Rocconi, Ph.D.

In this blog post, I want to introduce you to the R package mlmhelpr, which my colleague and ESM alumnus, Dr. Anthony Schmidt, and I created. mlmhelpr is a collection of helper functions designed to streamline the process of running multilevel or linear mixed models in R. The package assists users with common tasks such as computing the intraclass correlation and design effect, centering variables, and estimating the proportion of variance explained at each level.

Multilevel modeling, also known as linear mixed modeling or hierarchical linear modeling, is a statistical technique used to analyze nested data (e.g., students in schools) or longitudinal data. Both nested and longitudinal data often result in correlated observations, violating the assumption of independent observations. This issue is common in educational, social, health, and behavioral research. Multilevel modeling addresses this by modeling and accounting for the variability at each level, leading to more accurate estimates and inferences. 

The inspiration for developing mlmhelpr came from my experience teaching a multilevel modeling course. Throughout the semester, I found myself repeatedly writing custom functions to help students perform various tasks mentioned in our readings. Additionally, students often expressed frustration that lme4, the primary R package for estimating multilevel models, did not provide all the necessary information required by our textbooks and readings. After the semester ended, Anthony and I discussed the need to consolidate these functions into a single R package, making them accessible to everyone. mlmhelpr offers tests and statistics from many popular multilevel modeling textbooks such as Raudenbush and Bryk (2002), Hox et al. (2018), and Snijders and Bosker (2012), and like every other R package, it is free to use! 
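As an illustrative sketch of how the package slots into an lme4 workflow (assuming each helper accepts the fitted lmer model object, which is the pattern the descriptions below suggest; exact argument names may differ, so check the package documentation):

```r
# Sketch of a typical mlmhelpr workflow; requires both packages:
# install.packages(c("lme4", "mlmhelpr"))
library(lme4)
library(mlmhelpr)

# sleepstudy ships with lme4: reaction times nested within subjects
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

icc(fit)            # intraclass correlation
design_effect(fit)  # design effect implied by the clustering
taucov(fit)         # intercept-slope covariance and correlation
```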

The following is a list of package functions and descriptions.  

boot_se 

This function computes bootstrap standard errors and confidence intervals for fixed effects. It is mainly a wrapper for the lme4::bootMer function, adding confidence intervals and z-tests for the fixed effects, and it can be useful in instances where robust_se does not work, such as with nonlinear models (e.g., glmer models).

center 

This function refits a model using grand-mean centering, group-mean/within cluster centering (if a grouping variable is specified), or centering at a user-specified value. For additional information on centering variables in multilevel models, see Enders and Tofighi (2007). 
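The two most common centering choices can be sketched in base R (illustrative data; the center function itself refits the model for you):

```r
# Grand-mean vs. group-mean (within-cluster) centering, base R sketch
x <- c(2, 4, 6, 8)           # a predictor
g <- c("a", "a", "b", "b")   # cluster membership

x_grand <- x - mean(x)       # grand-mean centering: deviations from the overall mean
x_group <- x - ave(x, g)     # group-mean centering: deviations from each cluster's mean
```

Here the overall mean is 5, so x_grand is (-3, -1, 1, 3); the cluster means are 3 and 7, so x_group is (-1, 1, -1, 1).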

design_effect 

This function calculates the design effect, which quantifies the degree to which a sample deviates from a simple random sample. In the multilevel modeling context, it can be used to determine whether clustering will bias standard errors and whether the assumption of independence holds.
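A worked example using the common definition DE = 1 + (n̄ − 1)ρ, where n̄ is the average cluster size and ρ the intraclass correlation (values illustrative; mlmhelpr's exact implementation may differ):

```r
# Design effect: DE = 1 + (nbar - 1) * rho
nbar <- 20    # average cluster size (illustrative)
rho  <- 0.25  # intraclass correlation (illustrative)
de   <- 1 + (nbar - 1) * rho
de            # 5.75: the effective sample size is roughly n / 5.75
```

Even a modest ICC inflates the design effect quickly as clusters grow, which is why ignoring clustering understates standard errors.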

hausman 

This function performs a Hausman test for differences between random- and fixed-effects models with similar specifications. If the test statistic is not statistically significant, a random-effects model (i.e., a multilevel model) may be more suitable (i.e., efficient). The test is based on Fox (2016, p. 732, footnote 46). I consider this function experimental and would interpret its results with caution.

icc 

This function calculates the intraclass correlation. The ICC represents the proportion of total variance that is attributable to the group level. The ICC can be calculated for two or more levels in random-intercept models. For models with random slopes, it is advised to interpret results with caution. According to Kreft and De Leeuw (1998, p. 63), “The concept of intra-class correlation is based on a model with a random intercept only. No unique intra-class correlation can be calculated when a random slope is present in the model.” However, Snijders and Bosker (2012) offer a calculation to derive this value (equation 7.9), and their approach is implemented. For logistic models, the estimation method follows Hox et al. (2018, p. 107). For a discussion of different methods for estimating the intraclass correlation for binary responses, see Wu et al. (2012).
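For a two-level random-intercept model, the ICC reduces to a simple ratio of variance components, sketched here with illustrative values:

```r
# ICC = tau00 / (tau00 + sigma2) for a two-level random-intercept model
tau00  <- 0.25  # between-group (random intercept) variance (illustrative)
sigma2 <- 0.75  # level-1 residual variance (illustrative)
icc    <- tau00 / (tau00 + sigma2)
icc             # 0.25: a quarter of the total variance lies between groups
```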

ncv_tests 

This function computes three different non-constant variance tests: (1) the H test as discussed in Raudenbush and Bryk (2002, pp. 263-265) and Snijders and Bosker (2012, pp. 159-160), (2) an approximate Levene’s test discussed by Hox et al. (2018, p. 238), and (3) a variation of the Breusch-Pagan test. The H test computes a standardized measure of dispersion for each level-2 group and detects heteroscedasticity in the form of between-group differences in the level-1 residual variances. Levene’s test computes a one-way analysis of variance of the level-2 grouping variable on the squared residuals of the model. This test examines whether the variance of the residuals is the same in all groups. The Breusch-Pagan test regresses the squared residuals on the fitted model. A likelihood ratio test is used to compare this model with a null model that regresses the squared residuals on an empty model with the same random effects. This test examines whether the variance of the residuals depends on the predictor variables.

plausible_values 

This function computes the plausible value range for random effects, which is useful for gauging the magnitude of variation around fixed effects. For more information, see Raudenbush and Bryk (2002, p. 71) and Hoffman (2015, p. 166).
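For a random intercept, the 95% plausible value range is typically γ00 ± 1.96√τ00 (a sketch with illustrative values; the function computes this from the fitted model for you):

```r
# 95% plausible value range for a random intercept:
# gamma00 +/- 1.96 * sqrt(tau00)
gamma00 <- 12  # fixed-effect intercept (illustrative)
tau00   <- 4   # random-intercept variance (illustrative)
pv <- gamma00 + c(-1.96, 1.96) * sqrt(tau00)
pv             # 8.08 to 15.92: where most cluster intercepts are expected to fall
```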

r2_cor 

This function calculates the squared correlation between the observed and predicted values. This pseudo R-squared is similar to the R-squared used in OLS regression. It indicates the amount of variation in the outcome that is explained by the model (Peugh, 2010; Singer & Willett, 2003, p. 36). For additional pseudo-R2 measures, see the r2glmm and performance packages. 
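The computation itself is a one-liner in base R (illustrative data):

```r
# Pseudo R-squared: squared correlation between observed and predicted values
observed  <- c(1, 2, 3, 4, 5)
predicted <- c(1.2, 1.9, 3.1, 3.8, 5.0)  # e.g., fitted(model) in practice
r2 <- cor(observed, predicted)^2
r2  # close to 1 here, since the predictions track the observations closely
```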

r2_pve 

This function computes the proportion of variance explained for each random effect level in the model (i.e., level-1, level-2) as discussed by Raudenbush & Bryk (2002, p. 79). For additional pseudo-R2 measures, see the r2glmm and performance packages. 

reliability 

This function computes reliability coefficients for random effects according to Raudenbush and Bryk (2002) and Snijders and Bosker (2012). The reliability coefficient indicates how much of the variance in the random effect is due to true differences between clusters rather than random noise. The empirical Bayes estimator for the random effect combines the cluster mean and the grand mean, with the weight determined by the reliability coefficient. A reliability close to 1 puts more weight on the cluster mean while a reliability close to 0 puts more weight on the grand mean. 
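The weighting logic can be seen in the standard formula for the reliability of cluster j's mean, λj = τ00 / (τ00 + σ²/nj), sketched with illustrative values:

```r
# Reliability of cluster j's mean: lambda_j = tau00 / (tau00 + sigma2 / n_j)
tau00  <- 0.25  # between-cluster variance (illustrative)
sigma2 <- 0.75  # level-1 residual variance (illustrative)
n_j    <- 30    # size of cluster j
lambda <- tau00 / (tau00 + sigma2 / n_j)
lambda          # ~0.91: the empirical Bayes estimate leans heavily on the cluster mean
```

Larger clusters drive λj toward 1 (trust the cluster mean); small clusters shrink the estimate toward the grand mean.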

robust_se 

This function computes robust standard errors for linear models. It is a wrapper function for the cluster-robust standard errors from the clubSandwich package that includes confidence intervals. See the clubSandwich package for additional information and mlmhelpr::boot_se for an alternative. 

taucov 

This function calculates the covariance between random intercepts and slopes. It is used to quickly get the covariance and correlation between intercepts and slopes. By default, lme4 only displays the correlation. 
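The conversion between the two is just the usual covariance-to-correlation identity, sketched with illustrative values:

```r
# Intercept-slope correlation from the covariance: cor = cov01 / (sd0 * sd1)
cov01 <- -0.5  # intercept-slope covariance (illustrative)
sd0   <- 2     # SD of the random intercepts
sd1   <- 0.5   # SD of the random slopes
r01   <- cov01 / (sd0 * sd1)
r01            # -0.5: clusters with higher intercepts tend to have shallower slopes
```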

As of August 26, 2024, the package has been downloaded 5,062 times! If you use R to estimate multilevel models, give mlmhelpr a try, and let me know if you find any errors. I hope you find it helpful. If you are interested in creating your own R package, check out the excellent R Packages book by Wickham and Bryan: https://r-pkgs.org/.

Happy modeling!  

Resources and References 

Enders, C. K., & Tofighi, D. (2007). Centering predictor variables in cross-sectional multilevel models: A new look at an old issue. Psychological Methods, 12(2), 121-138.  

Fox, J. (2016). Applied regression analysis and generalized linear models (3rd ed.). SAGE Publications. 

Hoffman, L. (2015). Longitudinal analysis: Modeling within-person fluctuation and change. Routledge. 

Hox, J. J., Moerbeek, M., & van de Schoot, R. (2018). Multilevel analysis: Techniques and applications (3rd ed.). Routledge. 

Kreft, I. G. G., & De Leeuw, J. (1998). Introducing multilevel modeling. SAGE Publications. 

Peugh, J. L. (2010). A practical guide to multilevel modeling. Journal of School Psychology, 48(1), 85-112.  

Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). SAGE Publications. 

Singer, J. D., & Willett, J. B. (2003). Applied longitudinal data analysis: Modeling change and event occurrence. Oxford University Press. 

Snijders, T. A. B., & Bosker, R. J. (2012). Multilevel analysis: An introduction to basic and advanced multilevel modeling (2nd ed.). SAGE Publications. 

Wu, S., Crespi, C. M., & Wong, W. K. (2012). Comparison of methods for estimating the intraclass correlation coefficient for binary responses in cancer prevention cluster randomized trials. Contemporary Clinical Trials, 33(5), 869. 

Filed Under: Evaluation Methodology Blog

Educational Leadership and Policy Studies

325 Bailey Education Complex
Knoxville, Tennessee 37996

Phone: 865-974-2214
Fax: 865-974-6146
