Humans turn abstract referents and discourse structures into gesture using metaphors. The semantic relation between abstract communicative intentions and their physical realization in gesture is a question that has not been fully addressed. Our hypothesis is that a limited set of primary metaphors and image schemas underlies a wide range of gestures. Our analysis of a video corpus supports this view: over 90% of the gestures in the corpus are structured by image schemas via a limited set of primary metaphors. This analysis informs the extension of a computational model that grounds various communicative intentions in a physical, embodied context using those primary metaphors and image schemas. This model is used to generate gesture performances for virtual characters.