People’s beliefs about what is morally right and wrong vary widely between individuals, contexts, and cultures; however, they are thought to be governed by core latent constructs. While there is evidence that these constructs are reflected in natural language, this claim requires further testing. We demonstrate that the structure of moral values in natural discourse can be modeled by applying factor analysis to distributed representations of morally relevant terms learned by a neural network. We first show that robust latent constructs can be estimated from the covariance of distributed representations of construct exemplars. We then test whether the factor structure proposed by Moral Foundations Theory (MFT) is reflected in natural language. Finally, we conduct a bottom-up investigation of the structure of moral values in natural language using free responses reported by participants. Ultimately, we find evidence that the representation of moral values in natural language partially corresponds to MFT.
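
To make the core analytic step concrete, the following is a minimal sketch of how latent constructs might be estimated from the embeddings of construct exemplars: each exemplar word's vector is treated as a variable, the embedding dimensions as observations, and an exploratory factor analysis is fit to their covariance. The random vectors, word list, and number of factors here are illustrative placeholders, not the paper's actual data or preprocessing pipeline; in practice the vectors would come from a pre-trained model such as word2vec or GloVe.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Illustrative exemplar terms for two hypothetical moral constructs
# (stand-ins for the paper's construct exemplars).
exemplars = ["harm", "care", "cruel", "compassion",
             "cheat", "fair", "justice", "betray"]

# Placeholder embeddings: in a real analysis, replace with vectors from a
# pre-trained model, e.g. rows of a word2vec/GloVe embedding matrix.
rng = np.random.default_rng(0)
dim = 300
embeddings = {w: rng.standard_normal(dim) for w in exemplars}

# Arrange data so that words are variables and embedding dimensions are
# observations: X has shape (n_dimensions, n_words).
X = np.column_stack([embeddings[w] for w in exemplars])

# Exploratory factor analysis with varimax rotation; the number of factors
# (here 2) would normally be chosen via scree plots or fit indices.
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(X)

# Loadings of each exemplar word on each latent factor.
loadings = fa.components_.T  # shape: (n_words, n_factors)
for word, row in zip(exemplars, loadings):
    print(f"{word:12s}", np.round(row, 3))
```

With real embeddings, words belonging to the same moral construct would be expected to load on the same factor, which is the pattern the paper evaluates against the MFT factor structure.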