Recent machine learning systems in vision and language processing have drawn attention to single-word vector spaces, where concepts are represented by a set of basic features or attributes based on textual and perceptual input. However, such representations are still shallow and fall short of symbol grounding. In contrast, Grounded Cognition theories such as CAR (Concept Attribute Representation; Binder et al., 2009) provide an intrinsic analysis of word meaning in terms of sensory, motor, spatial, temporal, affective, and social features, as well as a mapping to corresponding brain networks. Building on this theory, this research aims to understand an intriguing effect of grounding, i.e., how word meaning changes depending on context. CAR representations of words are mapped to fMRI images of subjects reading different sentences, and the contributions of each word are determined through Multiple Linear Regression and the FGREP nonlinear neural network. As a result, the FGREP model in particular identifies significant changes in the CARs for the same word used in different sentences, thus supporting the hypothesis that context adapts the meaning of words in the brain. In future work, such context-modified word vectors could be used as representations in a natural language processing system, making it more effective and robust.
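To make the regression step concrete, below is a minimal Python/NumPy sketch. The data shapes, the random stand-in data, and the summarization of a sentence by a single CAR vector are illustrative assumptions rather than the paper's actual pipeline; the sketch only shows how a least-squares map from CAR attributes to voxel activations could be fit and then inverted to estimate the attribute weights that best explain a given image.

```python
# Minimal sketch of the Multiple Linear Regression step, with random
# stand-in data; all names, shapes, and counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_sentences, n_attributes, n_voxels = 200, 66, 500  # hypothetical counts

# Assume each sentence is summarized by one CAR attribute vector
# (e.g., an average over its words; the paper's composition may differ).
X = rng.random((n_sentences, n_attributes))          # sentence-level CARs
B_true = rng.normal(size=(n_attributes, n_voxels))   # unknown attribute-to-voxel map
Y = X @ B_true + 0.1 * rng.normal(size=(n_sentences, n_voxels))  # synthetic fMRI images

# Fit the linear map from CAR attributes to voxel activations.
B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Invert the fitted map for one image to estimate the attribute weights
# that best explain it, i.e. a context-adjusted CAR for that sentence.
car_est, *_ = np.linalg.lstsq(B_hat.T, Y[0], rcond=None)
print(car_est.shape)  # (66,)
```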
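The FGREP mechanism can be sketched in a similarly simplified form. FGREP (Miikkulainen & Dyer, 1991) extends backpropagation into the input layer, so the input representations themselves are modified by the error signal; the deliberately stripped-down linear version below (hypothetical shapes, learning rate, and data; no hidden layer) only isolates how a word's CAR vector could be adapted to better account for the fMRI pattern of a particular sentence, i.e., how context can reshape word meaning.

```python
# Simplified FGREP-style sketch: error is backpropagated into the input
# representation itself, adapting one word's CAR vector to a sentence
# context. Linear, single-layer, random data; purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_attributes, n_voxels = 66, 500

car = rng.random(n_attributes)                        # initial CAR for one word
W = 0.05 * rng.normal(size=(n_attributes, n_voxels))  # frozen attribute-to-voxel weights
target = rng.random(n_voxels)                         # fMRI pattern for one sentence

lr = 0.01
for _ in range(500):
    err = car @ W - target        # prediction error for this sentence
    car -= lr * (err @ W.T)       # FGREP step: update the input vector, not W

# 'car' has now drifted from its initial value: a context-adjusted meaning.
```

In the study itself the network is nonlinear and trained across many sentences and subjects; the sketch above shows only the core mechanism by which per-sentence error can modify a word's attribute vector.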