Recent work on adversarial examples demonstrates the brittleness of many state-of-the-art machine learning systems. We investigate a human analog, asking: what fraction of natural speech can be turned into illusions that alter humans' perception, or cause different people to perceive significantly different messages? Using generated videos, we first empirically estimate that 17% of words occurring in natural speech have some susceptibility to the McGurk effect, the phenomenon by which a carefully chosen video clip added to the audio channel alters the viewer's perception of the message. We develop a bag-of-phonemes prediction model for word-level illusionability, which we extend with natural language modeling to build a sentence-level framework. We train an instantiation of this framework using Amazon Mechanical Turk evaluations of sentence-level illusions. Finally, we generate several new instances of the Yanny/Laurel illusion, demonstrating that it is not an isolated occurrence. The surprising density of illusionable instances warrants further investigation from both cognitive and security perspectives.
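To make the "bag-of-phonemes" idea concrete, the following is a minimal sketch, not the paper's implementation: each word is represented by unordered counts of its phonemes, and a simple classifier maps that vector to an illusionability score. The LEXICON, LABELS, and the choice of logistic regression are illustrative assumptions.

```python
# Sketch (assumed, not the authors' code): bag-of-phonemes word-level
# illusionability predictor using scikit-learn.
from collections import Counter

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy lexicon: word -> ARPAbet-style phoneme sequence.
LEXICON = {
    "bat": ["B", "AE", "T"],
    "mat": ["M", "AE", "T"],
    "pat": ["P", "AE", "T"],
    "sun": ["S", "AH", "N"],
    "run": ["R", "AH", "N"],
    "fun": ["F", "AH", "N"],
}

# Hypothetical labels: 1 = susceptible to a McGurk-style illusion
# (e.g., bilabial onsets /b/, /m/, /p/ are classically McGurk-prone),
# 0 = not susceptible. Real labels would come from human evaluations.
LABELS = {"bat": 1, "mat": 1, "pat": 1, "sun": 0, "run": 0, "fun": 0}

def phoneme_counts(word):
    """Bag-of-phonemes features: unordered counts of a word's phonemes."""
    return Counter(LEXICON[word])

words = list(LEXICON)
vectorizer = DictVectorizer(sparse=False)
X = vectorizer.fit_transform([phoneme_counts(w) for w in words])
y = [LABELS[w] for w in words]

model = LogisticRegression().fit(X, y)

# Predicted probability that a query word is illusionable.
probe = vectorizer.transform([phoneme_counts("mat")])
print(model.predict_proba(probe)[0, 1])
```

Under this sketch, a sentence-level framework could combine such per-word scores with a language model's substitution probabilities, as the abstract describes.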