UCLA Electronic Theses and Dissertations

Behavioral Health Intervention Effectiveness and Multiple Testing

Abstract

Behavioral health interventions (BHIs) have unique features that pose statistical challenges, both in the controlled trial and implementation stages. Family- and school-based preventive BHIs involve skill-building modules, delivered by trained individuals, that aim to improve well-being by promoting resiliency, empathy, communication, emotional regulation, and other related skills. Outcomes are measured through validated questionnaires given before and after the intervention. When establishing an evidence base, researchers often conduct a randomized controlled trial of the BHI, through which they measure many potentially correlated outcomes. If the investigator hypothesizes that the intervention impacts each outcome measured, the problem of multiple testing must be considered when determining the overall efficacy of the intervention. It would be remiss to treat these tests as independent, and existing methods for dependent outcomes require specification of the unknown correlation structure. To address this situation, we propose the use of a permutation method to determine statistical evidence of an overall intervention effect. Two possible versions of a permutation test are presented: one that focuses on the number of significant individual hypothesis tests needed to indicate overall efficacy, and one that uses the magnitudes of the p-values for the individual tests to calculate an overall p-value for intervention efficacy.
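
The sketch below illustrates the general idea of both permutation-test versions described above; it is a minimal illustration under assumed simplifications (a two-arm trial with continuous outcomes and a two-sample t-test per outcome), not the dissertation's exact procedure. Permuting the treatment labels jointly across all outcomes preserves their unknown correlation structure under the null without having to specify it.

```python
# Minimal sketch of the two permutation tests (assumptions: two-arm trial,
# continuous outcomes in rows of `y` (participants x outcomes), 0/1 treatment
# indicator `z`, per-outcome two-sample t-tests). Not the dissertation's exact method.
import numpy as np
from scipy import stats

def outcome_pvalues(y, z):
    """Per-outcome two-sample t-test p-values, treated vs. control."""
    return np.array([stats.ttest_ind(y[z == 1, j], y[z == 0, j]).pvalue
                     for j in range(y.shape[1])])

def permutation_overall_test(y, z, n_perm=2000, alpha=0.05, seed=0):
    """Overall intervention-effect p-values from jointly permuting labels.

    Returns p-values for two summaries of the individual tests:
      'count'  -- number of individually significant outcomes
      'fisher' -- combination of the p-value magnitudes (Fisher-type statistic)
    """
    rng = np.random.default_rng(seed)
    p_obs = outcome_pvalues(y, z)
    count_obs = np.sum(p_obs < alpha)
    fisher_obs = -2 * np.sum(np.log(p_obs))

    count_null = np.empty(n_perm)
    fisher_null = np.empty(n_perm)
    for b in range(n_perm):
        # Permute treatment labels only; outcome rows stay intact,
        # so the correlation among outcomes is preserved under the null.
        p_b = outcome_pvalues(y, rng.permutation(z))
        count_null[b] = np.sum(p_b < alpha)
        fisher_null[b] = -2 * np.sum(np.log(p_b))

    return {"count": (1 + np.sum(count_null >= count_obs)) / (n_perm + 1),
            "fisher": (1 + np.sum(fisher_null >= fisher_obs)) / (n_perm + 1)}
```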

Once efficacy has been demonstrated in an initial randomized trial, BHIs are often broadly implemented in real-world settings where adaptations to the intervention protocol naturally arise. Prevention scientists have recognized the need for ongoing evaluation of intervention adaptations. Again, we must consider the problem of multiple testing, because the total number of hypothesis tests is unknown (and potentially unlimited) as data are continually collected. Existing statistical methods fall short when using continuously generated real-world evidence to compare concurrent intervention versions. We propose combining methods used for observational data with methods for adaptive platform clinical trials. Since the data are observational, we use a pre-processing step to account for differences in covariate distributions among intervention groups. This allows us to estimate intervention effectiveness more accurately and make comparisons among versions. We have developed a Bayesian analysis framework for interim decision making throughout the platform trial that allows us to determine the superiority or futility of concurrent intervention versions relative to the current best version. Performance of the analysis framework is examined using simulations. Since type I error rate and power are not well defined in this context, we develop new metrics with which to evaluate the method. We demonstrate the potential utility of the combined framework using BHI data collected from a classroom-based resilience curriculum administered to Los Angeles Unified School District (LAUSD) high school students.
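
The following sketch shows one way the combined framework could look in code; it is an illustrative reconstruction under stated assumptions, not the dissertation's actual model. It assumes the covariate pre-processing step is inverse-probability-of-treatment weighting from a multinomial propensity model, and the interim decision uses a normal approximation to the posterior of each version's weighted mean outcome, with assumed superiority and futility probability thresholds.

```python
# Sketch of covariate pre-processing plus Bayesian interim comparison of
# concurrent intervention versions. Assumptions (not from the source):
# IPT weighting via multinomial logistic regression, normal-approximation
# posteriors for weighted mean outcomes, and illustrative decision thresholds.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipt_weights(X, group):
    """Stabilized inverse-probability weights; `group` is integer-coded 0..K-1."""
    ps = LogisticRegression(max_iter=1000).fit(X, group).predict_proba(X)
    p_assigned = ps[np.arange(len(group)), group]      # P(observed version | X)
    marginal = np.bincount(group) / len(group)          # stabilizing numerator
    return marginal[group] / p_assigned

def interim_decision(y, group, X, best=0, p_sup=0.975, p_fut=0.10,
                     n_draw=10000, seed=0):
    """Label each non-best version 'superior', 'futile', or 'continue'.

    Decisions are based on the posterior probability that a version's
    covariate-weighted mean outcome exceeds that of the current best version.
    """
    rng = np.random.default_rng(seed)
    w = ipt_weights(X, group)
    draws = {}
    for g in np.unique(group):
        yg, wg = y[group == g], w[group == g]
        mean = np.average(yg, weights=wg)
        ess = wg.sum() ** 2 / np.sum(wg ** 2)            # effective sample size
        var = np.average((yg - mean) ** 2, weights=wg) / ess
        draws[g] = rng.normal(mean, np.sqrt(var), n_draw)  # approximate posterior

    decisions = {}
    for g, d in draws.items():
        if g == best:
            continue
        prob_better = np.mean(d > draws[best])
        decisions[g] = ("superior" if prob_better > p_sup
                        else "futile" if prob_better < p_fut
                        else "continue")
    return decisions
```

In this sketch, weighting before the Bayesian comparison stands in for the pre-processing step described above: it adjusts for differences in covariate distributions among the versions so that the interim superiority and futility calls reflect the interventions rather than who happened to receive them.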
