This paper describes a semantically based computational theory of natural language comprehension. The theory argues for a semantically rich lexicon whose entries can be characterized as monosemic, generative, and image-like. The comprehension process uses a word's basic definition to decide how new information is to be combined with what has been interpreted so far. Then, more importantly, the background information is used to generate the meaning of the combined words. Other semantically based approaches are also reviewed, one each from the disciplines of AI, Cognitive Science, and Linguistics.