We propose a neural dynamic architecture that models negation processing. The architecture receives as input a visual scene and a relational phrase such as ``The blue object is not to the right of the yellow object'' or ``The blue object is to the right of the green object'', and autonomously determines whether the phrase correctly describes the scene. The model is built from empirically grounded components for perceptually grounded cognition and is constrained by neural principles. We demonstrate that the model can account for two well-established reaction time effects: the negation effect, in which reaction times are longer for negated than for affirmative phrases, and the polarity-by-truth-value interaction effect, in which reaction times for false negated phrases are shorter than those for true negated phrases, whereas the opposite holds for affirmative phrases. The model is consistent with some aspects of the two-step simulation theory.