Many events that humans and other organisms experience contain regularities in which certain elements within an event predict certain others. While some of these regularities involve tracking the co-occurrences between temporally adjacent stimuli, others involve tracking the co-occurrences between temporally distant stimuli (i.e., non-adjacent dependencies, NADs). Prior research shows robust learning of adjacent dependencies in humans and other species, whereas learning NADs is more difficult and often requires support from properties of the stimulus to help learners notice the NADs. Here we report four experiments that examined NAD learning from various types of visual stimuli. The results suggest that continuous movements aid the acquisition of NADs. We also found that human motion leads to more robust NAD learning than object motion, perhaps because human motion is encoded in a richer representation. Such a representation could support better memory and recall, and thereby provide a stronger signal for NAD learning.