The present study investigated online neural indices of statistical learning of silent speech. Adult participants were exposed to naturally mouthed, silent syllable streams from an artificial language under two conditions. In one condition, 12 syllables occurred in random order; in the other, the same syllables were structured into four syllable triplets, i.e., statistical words. In the recorded EEG signal, phase synchronisation of neural oscillations was assessed at the rates at which syllables and words occurred in the exposure streams. The largest phase synchronisation was detected at the word rate during exposure to the structured stream. Moreover, neural synchronisation to the word rate increased over the course of exposure to the structured stream. In a behavioural post-test, however, no learning effects were detected. The EEG results demonstrate sensitivity to statistical regularities in viewed silent speech. These findings indicate that statistical learning of speech and language can be measured online effectively even in the absence of auditory cues.
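Phase synchronisation at a stimulation rate is commonly quantified as inter-trial phase coherence (ITC) in a frequency-tagging design. The sketch below is a minimal illustration of that general measure, not the study's actual pipeline; the sampling rate, trial structure, and presentation rates (a 4 Hz syllable rate, hence a 4/3 Hz word rate for triplets) are assumed values for demonstration only.

```python
import numpy as np

def itc_at_freq(epochs, freq, sfreq):
    """Inter-trial phase coherence at one frequency.

    epochs: array of shape (n_trials, n_samples), one EEG channel.
    Returns ITC in [0, 1]; 1 means identical phase across trials.
    """
    n_trials, n_samples = epochs.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / sfreq)
    bin_idx = int(np.argmin(np.abs(freqs - freq)))   # nearest FFT bin
    spectra = np.fft.rfft(epochs, axis=1)[:, bin_idx]
    phasors = spectra / np.abs(spectra)              # keep phase, drop amplitude
    return float(np.abs(phasors.mean()))             # mean resultant length

# Simulated data with assumed parameters: 100 Hz sampling, 9 s epochs,
# 30 trials, and a phase-locked component at the hypothetical 4/3 Hz
# word rate embedded in noise (mimicking a structured stream).
sfreq, dur, n_trials = 100.0, 9.0, 30
t = np.arange(0.0, dur, 1.0 / sfreq)
rng = np.random.default_rng(0)
epochs = np.array([np.sin(2 * np.pi * (4 / 3) * t)
                   + rng.normal(0.0, 1.0, t.size)
                   for _ in range(n_trials)])

itc_word = itc_at_freq(epochs, 4 / 3, sfreq)  # high: phase-locked rate
itc_ctrl = itc_at_freq(epochs, 2.7, sfreq)    # low: off-target control frequency
```

In such a design, an ITC peak at the word rate that exceeds the value at neighbouring control frequencies is taken as evidence of neural entrainment to the statistical words.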