Our Dynamic Neural Field (DNF) model aims to simulate audiovisual integration in speech perception, including the well-known McGurk effect (McGurk & MacDonald, 1976). The classic McGurk effect is characterized by fusion, whereby incongruent audio and visual stimuli are merged into a single percept; however, other audiovisual effects are also documented in the extant literature. Our DNF model uses the same architecture and parameters across stimulus combinations to simulate a host of audiovisual illusory effects as well as audiovisually congruent, auditory-only, and visual-only controls. Our simulation results replicate rates of visual-dominant percepts, audiovisual fusion percepts, auditory-dominant percepts, and auditory dichotic fusion found in the extant literature, and illustrate how a complex pattern of responses across different stimulus configurations can arise from common neural dynamics involved in binding information across sensory modalities. We are currently exploring how hemodynamic response predictions generated through our neural simulations relate to real-time behavior.
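To make the binding dynamics concrete, the sketch below simulates a standard one-dimensional Amari-style neural field receiving two localized inputs, a stand-in for auditory and visual evidence about a syllable. This is a minimal illustration only: the field equation is the textbook DNF form, and all parameters, input locations, and the readout rule are hypothetical rather than taken from our model. With local excitation and surround inhibition, partially overlapping inputs can merge into a single suprathreshold peak, the field-dynamic analogue of a fused percept.

```python
import numpy as np

# Minimal 1-D Amari-style dynamic neural field (illustrative parameters,
# not the model's actual architecture). The feature axis could represent,
# e.g., place of articulation of a perceived syllable.
n = 181
x = np.linspace(-90.0, 90.0, n)
dx = x[1] - x[0]

tau = 10.0        # field time constant (ms)
h = -5.0          # resting level
dt = 1.0          # Euler integration step (ms)


def gauss(center, width, amp):
    """Gaussian profile over the feature axis."""
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2)


def sigmoid(u, beta=1.5):
    """Output nonlinearity gating which field sites interact."""
    return 1.0 / (1.0 + np.exp(-beta * u))


# Lateral interaction kernel: local excitation, broader inhibition.
kernel = gauss(0.0, 5.0, 4.0) - gauss(0.0, 12.0, 2.0)

# Hypothetical incongruent inputs: auditory evidence at one feature
# value, visual evidence at a nearby one; their overlap permits fusion.
s_audio = gauss(-10.0, 5.0, 6.0)
s_visual = gauss(10.0, 5.0, 6.0)

u = h * np.ones(n)                       # field activation
for _ in range(500):                     # simulate 500 ms
    interaction = np.convolve(sigmoid(u), kernel, mode="same") * dx
    u += dt / tau * (-u + h + s_audio + s_visual + interaction)

# Readout (illustrative): a single peak between the two input sites is
# interpreted as a fused percept; two separate peaks would indicate
# segregated auditory- and visual-dominant responses.
print(f"peak at {x[np.argmax(u)]:.1f} on the feature axis")
```

Because the same kernel and resting level are used regardless of where the inputs fall, whether the field settles into one fused peak or distinct peaks depends only on the stimulus configuration, which is the sense in which a fixed architecture can yield the varied response pattern described above.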