In randomized trials, appropriately adjusting for baseline variables and
short-term outcomes can lead to increased precision and reduced sample size. We
examine the impact of such adjustment in group sequential designs, i.e.,
designs with preplanned interim analyses where enrollment may be stopped early
for efficacy or futility. We address the following questions: How much
precision gain can be obtained by appropriately adjusting for baseline
variables and a short-term outcome? How is this precision gain impacted by
factors such as the proportion of pipeline participants (those who have
enrolled but have not yet had their primary outcomes measured) and treatment
effect heterogeneity? What is the resulting impact on power and average sample size in
a group sequential design? We derive an asymptotic formula that decomposes the
overall precision gain from adjusting for baseline variables and a short-term
outcome into contributions from the factors mentioned above, for efficient
estimators in the model that only assumes randomization and independent
censoring. We use our formula to approximate the precision gain from a targeted
minimum loss-based estimator applied to data from a completed trial of a new
surgical intervention for stroke. Our formula implies that (for an efficient
estimator) adjusting for a prognostic baseline variable leads to at least as
much asymptotic precision gain as adjusting for an equally prognostic
short-term outcome. In many cases, such as our stroke trial application, the
former leads to substantially greater precision gains than the latter. In our
simulation study, we show how precision gains from adjustment can be converted
into sample size reductions (even when there is no treatment effect).