eScholarship
Open Access Publications from the University of California

A parsimonious weight function for modeling publication bias

  • Author(s): Citkowicz, Martyna
  • Advisor(s): Vevea, Jack L.
Abstract

Quantitative research literature is often biased because studies that fail to find a significant effect (or that demonstrate effects in an unexpected or undesired direction) are less likely to be published. This phenomenon, termed publication bias, causes problems when researchers attempt to synthesize results through the set of techniques known as meta-analysis. Various methods exist that estimate publication bias and correct meta-analyses for it. However, no single method 1) accounts for continuous moderators by including them within the model, 2) allows for substantial data heterogeneity, 3) produces an adjusted mean effect size, 4) includes a formal test for publication bias, and 5) allows for correction when only a small number of effects is included in the analysis. This dissertation develops a method that encompasses all five characteristics. The model uses the beta density as a weight function that estimates the selection process in order to produce adjusted parameter estimates. The model is implemented both by maximum likelihood (ML) and by Bayesian estimation. The utility of the model is assessed through simulations and through application to real data sets. The ML simulations indicate that the likelihood-ratio test has good Type I error performance, although the test is not very powerful for small data sets. Coverage rates indicate that the model's 95% confidence intervals based on adjusted parameter estimates (those that correct for bias) are more likely to contain the true parameter values than are CIs around the unadjusted parameter estimates (those that do not account for bias). Whenever bias is present, the bias and root mean squared error of the adjusted mean effect estimate are better than those of the unadjusted mean effect. The ML simulations also show that the model is good at distinguishing systematic study differences from publication bias.
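The general idea can be illustrated with a small sketch. The code below assumes the weight function takes the beta-density form w(p) = p^(a-1) (1-p)^(b-1) on each study's one-tailed p-value, and fits the weighted likelihood by ML; all function names, the selection mechanism, and the parameter values are illustrative assumptions, not the dissertation's actual implementation.

```python
# Hedged sketch of a beta-density weight-function selection model
# (assumed form: w(p) = p**(a-1) * (1-p)**(b-1) on one-tailed p-values),
# fit by maximum likelihood. Illustrative only.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize
from scipy.integrate import quad

rng = np.random.default_rng(1)

def simulate_biased_studies(n, mu, tau2, a=0.5, b=1.0):
    """Draw study effects, then retain each with probability proportional
    to the beta weight on its one-tailed p-value (a crude selection step)."""
    ys, vs = [], []
    while len(ys) < n:
        v = rng.uniform(0.02, 0.2)                  # sampling variance
        y = rng.normal(mu, np.sqrt(tau2 + v))       # observed effect
        p = 1.0 - norm.cdf(y / np.sqrt(v))          # one-tailed p-value
        w = p**(a - 1) * (1 - p)**(b - 1)           # beta-density weight
        if rng.uniform() < min(w / 10.0, 1.0):      # rejection sampling
            ys.append(y); vs.append(v)
    return np.array(ys), np.array(vs)

def neg_loglik(theta, y, v):
    """Negative log-likelihood of the adjusted model: weighted normal
    density divided by its normalizing integral for each study."""
    mu, log_tau2, log_a, log_b = theta
    tau2, a, b = np.exp(log_tau2), np.exp(log_a), np.exp(log_b)
    ll = 0.0
    for yi, vi in zip(y, v):
        s = np.sqrt(tau2 + vi)
        def integrand(t):
            p = np.clip(1.0 - norm.cdf(t / np.sqrt(vi)), 1e-10, 1 - 1e-10)
            return p**(a - 1) * (1 - p)**(b - 1) * norm.pdf(t, mu, s)
        A, _ = quad(integrand, mu - 8 * s, mu + 8 * s)  # normalizer
        pi = np.clip(1.0 - norm.cdf(yi / np.sqrt(vi)), 1e-10, 1 - 1e-10)
        ll += ((a - 1) * np.log(pi) + (b - 1) * np.log(1 - pi)
               + norm.logpdf(yi, mu, s) - np.log(A))
    return -ll

y, v = simulate_biased_studies(40, mu=0.2, tau2=0.05)
naive = np.average(y, weights=1 / v)                # unadjusted mean
fit = minimize(neg_loglik, x0=[naive, np.log(0.05), 0.0, 0.0],
               args=(y, v), method="Nelder-Mead",
               options={"maxiter": 200})
print("unadjusted mean:", round(float(naive), 3))
print("adjusted mean:  ", round(float(fit.x[0]), 3))
```

A likelihood-ratio test of the kind described above would compare this fitted likelihood against the same model with a = b = 1 (a flat weight, i.e. no selection).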
The utility of the Bayesian implementation of the model is demonstrated in two ways: 1) when ML estimation produces nonsensical parameter estimates for real data sets, Bayesian estimation recovers appropriate estimates; and 2) when bias is present in small data sets, the adjusted Bayesian parameter estimates are generally closer to the true population values than the adjusted ML estimates.
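One way such stabilization can work is through priors that pull the weight parameters toward a no-selection value when the data are sparse. The self-contained sketch below samples a beta-weight model of the assumed form w(p) = p^(a-1) (1-p)^(b-1) by random-walk Metropolis; the priors, data-generating shortcut, and tuning constants are all illustrative assumptions, not the dissertation's actual Bayesian implementation.

```python
# Hedged sketch: Bayesian estimation of an assumed beta-weight selection
# model via random-walk Metropolis. Gamma(2, 1) priors (mode at 1) pull
# the weight parameters toward w(p) = 1, i.e. no selection, which can
# stabilize estimation in small samples. Illustrative only.
import numpy as np
from scipy.stats import norm, gamma
from scipy.integrate import quad

rng = np.random.default_rng(7)

# A small "biased" data set: taking absolute values is a crude stand-in
# for selection favoring positive effects (assumption for illustration).
v = rng.uniform(0.02, 0.2, size=12)
y = np.abs(rng.normal(0.1, np.sqrt(0.05 + v)))

def log_post(mu, a, b, tau2=0.05):
    """Log posterior: beta-weighted normal likelihood plus Gamma priors
    on the weight parameters (tau2 held fixed to keep the sketch small)."""
    if a <= 0 or b <= 0:
        return -np.inf
    lp = gamma.logpdf(a, 2, scale=1) + gamma.logpdf(b, 2, scale=1)
    for yi, vi in zip(y, v):
        s = np.sqrt(tau2 + vi)
        def integrand(t):
            p = np.clip(1 - norm.cdf(t / np.sqrt(vi)), 1e-10, 1 - 1e-10)
            return p**(a - 1) * (1 - p)**(b - 1) * norm.pdf(t, mu, s)
        A, _ = quad(integrand, mu - 8 * s, mu + 8 * s)  # normalizer
        pi = np.clip(1 - norm.cdf(yi / np.sqrt(vi)), 1e-10, 1 - 1e-10)
        lp += ((a - 1) * np.log(pi) + (b - 1) * np.log(1 - pi)
               + norm.logpdf(yi, mu, s) - np.log(A))
    return lp

# Random-walk Metropolis over (mu, a, b).
theta = np.array([0.1, 1.0, 1.0])
cur = log_post(*theta)
draws = []
for _ in range(800):
    prop = theta + rng.normal(0, [0.05, 0.1, 0.1])
    cand = log_post(*prop)
    if np.log(rng.uniform()) < cand - cur:   # accept/reject step
        theta, cur = prop, cand
    draws.append(theta[0])
post_mu = float(np.mean(draws[200:]))        # discard burn-in
print("posterior mean of mu:", round(post_mu, 3))
```

The prior keeps the sampler from wandering into the extreme weight-parameter values that can make small-sample ML estimates nonsensical, which parallels the stabilization described above.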
