To investigate how co-speech gestures modulate linguistic understanding, we experimentally test two Bayesian pragmatic models. We identify the semantic effect of a spoken or gestural utterance with the change it induces in a listener's probabilistic predictions of the speaker's communicative intentions. We focus on action-expressing gestures and the corresponding verbs, as well as on action-affording instruments and the nouns that denote them. Combining Pustejovsky's Generative Lexicon approach with Gibson's affordance theory, we ask: (1) Does a co-speech gesture make any difference for semantic comprehension and the corresponding probabilistic prediction? (2) Is the semantic effect of a gesture similar or identical to that of the corresponding verb? (3) To what extent does the gesture's semantic effect depend on the listener's recognition of the gesture as an expression of the corresponding verb? (4) Does the comprehended affordance predict the instrument better than verb–noun co-occurrence statistics (GloVe)?
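The notion of a semantic effect used above can be made concrete as a Bayesian belief update: the listener holds a prior over the speaker's intended instrument and revises it upon hearing an utterance (a verb, or recognizing a gesture). The sketch below is purely illustrative; the instruments, prior, and likelihood values are hypothetical placeholders, not the paper's experimental stimuli or fitted model.

```python
# Illustrative sketch of the Bayesian update behind "semantic effect":
# the effect of an utterance is the change it induces in the listener's
# distribution over speaker intentions. All numbers are hypothetical.

def bayesian_update(prior, likelihood):
    """Posterior P(intention | utterance) proportional to
    P(utterance | intention) * P(intention)."""
    unnorm = {i: prior[i] * likelihood[i] for i in prior}
    z = sum(unnorm.values())
    return {i: p / z for i, p in unnorm.items()}

# Listener's prior over which instrument the speaker intends (made up).
prior = {"knife": 0.5, "scissors": 0.3, "saw": 0.2}

# Likelihood of producing the verb "cut" given each intended instrument
# (made up; a gesture would contribute an analogous likelihood term).
lik_verb = {"knife": 0.8, "scissors": 0.7, "saw": 0.4}

posterior = bayesian_update(prior, lik_verb)

# The semantic effect is the prior-to-posterior change per intention.
effect = {i: posterior[i] - prior[i] for i in prior}
print(posterior)
print(effect)
```

Comparing the effect induced by a verb with that induced by the corresponding gesture, as in questions (2) and (3), amounts to comparing the posteriors produced by their respective likelihood terms.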