eScholarship
Open Access Publications from the University of California


UCLA Electronic Theses and Dissertations

Impacts of Model Specification on Statistical Power and Type I Error Rate in Moderated Mediation Analysis

Abstract

Moderated mediation models are commonly used in psychological research and other academic fields to model how and when effects occur. When specifying this type of model, researchers must choose which paths from the mediation model are moderated. This dissertation examines how model specification impacts statistical power and Type I error rate for the index of moderated mediation. In a meta-analytic review, we found that six model specifications account for 85% of published moderated mediation analyses, so this dissertation focuses on those six models. When considering power and Type I error rate, two attributes matter: the data analysis model and the data generating process (DGP). Relative to the DGP, the data analysis model can be correctly specified, over-specified, under-specified, or completely misspecified. A Monte Carlo simulation study was run to examine the impacts of model specification on power and Type I error rate, and results were analyzed using multilevel logistic regression along with figures and tables. Over-specified models had lower statistical power to detect a significant index of moderated mediation than correctly specified models. Under-specified models had slightly higher power when moderation on the direct effect was omitted, but otherwise had much lower power than correctly specified models; parameter bias was also unacceptably high for most under-specified models. Completely misspecified models generally retained acceptable Type I error rates, with the notable exception of inflated Type I error rates when moderation was omitted from the direct effect. Overall, while many published moderated mediation models may not have sample sizes large enough for adequate statistical power, over-specifying or under-specifying a model can lower statistical power further, while complete model misspecification risks an inflated Type I error rate.
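To make the quantity under study concrete: in a first-stage moderated mediation model, the a-path depends on a moderator W, the indirect effect of X on Y through M is (a1 + a3*W)*b1, and the index of moderated mediation is the product a3*b1 (Hayes, 2015). The sketch below is a minimal illustration, not the dissertation's simulation code; the parameter values and a normal-error DGP are assumptions for demonstration only.

```python
import numpy as np

def simulate_dgp(n, a1=0.3, a3=0.2, b1=0.4, cp=0.2, rng=None):
    # Hypothetical first-stage moderated mediation DGP:
    # the a-path (X -> M) is moderated by W; Y depends on M and X.
    rng = rng or np.random.default_rng(0)
    x = rng.normal(size=n)
    w = rng.normal(size=n)
    m = a1 * x + a3 * x * w + rng.normal(size=n)
    y = b1 * m + cp * x + rng.normal(size=n)
    return x, w, m, y

def index_of_modmed(x, w, m, y):
    # Mediator model: M ~ 1 + X + W + X*W  (correctly specified for this DGP)
    Xm = np.column_stack([np.ones_like(x), x, w, x * w])
    a_hat = np.linalg.lstsq(Xm, m, rcond=None)[0]  # a_hat[3] estimates a3
    # Outcome model: Y ~ 1 + M + X
    Xy = np.column_stack([np.ones_like(x), m, x])
    b_hat = np.linalg.lstsq(Xy, y, rcond=None)[0]  # b_hat[1] estimates b1
    # Index of moderated mediation = a3 * b1
    return a_hat[3] * b_hat[1]

x, w, m, y = simulate_dgp(20000, rng=np.random.default_rng(42))
print(index_of_modmed(x, w, m, y))  # close to a3 * b1 = 0.08 at this n
```

Omitting the X*W term from the mediator model here would be an under-specification in the abstract's sense, since the estimated index would no longer target the moderated path; a full power study would repeat this over many replications and specifications with a significance test (e.g., a percentile bootstrap) on the index.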
