Theories of visually guided action account for online control in the presence of reliable sources of visual information, and for predictive control to compensate for visuo-motor delays and temporary occlusion. In this study, we characterize the temporal relationship between the information-integration window and prediction distance using computational models. Subjects were immersed in a simulated environment and attempted to catch virtual balls that were transiently "blanked" during flight. Recurrent neural networks were trained to reproduce subjects' gaze and hand movements during the blank. The models successfully predict gaze behavior within 3° and hand movements within 8.5 cm as far as 500 ms ahead in time, with an integration window as short as 27 ms. Furthermore, we quantified the contribution of each input source of information to motor output through an ablation study. The model is a proof of concept for prediction as a discrete mapping between information integrated over time and a temporally distant motor output.