The topic of this paper is the development of dynamic lexical representations using artificial neural networks. Much previous work on connectionist natural language processing has relied on manually encoded lexical representations for words. However, from both a cognitive and an engineering point of view, it is difficult to find appropriate representations for the lexicon entries of a given task. In this context, this paper explores building word representations during training for a particular task. Using simple recurrent networks, principal component analysis, and hierarchical clustering, we show how lexical representations can be formed dynamically, in particular for neural network modules in large, real-world computational speech-language models.
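The sketch below illustrates, under stated assumptions, the kind of pipeline the paragraph describes: an Elman-style simple recurrent network is trained on next-word prediction over a toy corpus, the hidden-layer vectors that emerge for each word are taken as its learned lexical representation, and these representations are then analysed with principal component analysis and hierarchical clustering. The corpus, network sizes, learning rate, and averaging scheme are illustrative assumptions, not the paper's actual model or data.

```python
# Minimal sketch (not the paper's implementation) of dynamically formed
# lexical representations: train an Elman-style simple recurrent network,
# then analyse the learned hidden vectors with PCA and clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)

# Toy corpus of word sequences (illustrative assumption).
sentences = [
    "boy sees girl", "girl sees boy", "boy likes dog",
    "dog chases cat", "cat sees dog", "girl likes cat",
]
words = sorted({w for s in sentences for w in s.split()})
idx = {w: i for i, w in enumerate(words)}
V, H = len(words), 10          # vocabulary size, hidden units (assumed)

def one_hot(i):
    v = np.zeros(V); v[i] = 1.0
    return v

def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))
def softmax(x): e = np.exp(x - x.max()); return e / e.sum()

# Weights: input->hidden, context->hidden, hidden->output.
W_in  = rng.normal(0, 0.1, (H, V))
W_ctx = rng.normal(0, 0.1, (H, H))
W_out = rng.normal(0, 0.1, (V, H))
lr = 0.1

# Train with Elman-style truncated backpropagation: the context layer is a
# frozen copy of the previous hidden state, so no gradient flows through it.
for epoch in range(300):
    for s in sentences:
        seq = [idx[w] for w in s.split()]
        context = np.zeros(H)
        for cur, nxt in zip(seq[:-1], seq[1:]):
            x = one_hot(cur)
            h = sigmoid(W_in @ x + W_ctx @ context)
            y = softmax(W_out @ h)
            dy = y - one_hot(nxt)              # cross-entropy gradient
            dh = (W_out.T @ dy) * h * (1 - h)  # sigmoid derivative
            W_out -= lr * np.outer(dy, h)
            W_in  -= lr * np.outer(dh, x)
            W_ctx -= lr * np.outer(dh, context)
            context = h

# A word's lexical representation: its hidden vector averaged over contexts.
reps = np.zeros((V, H)); counts = np.zeros(V)
for s in sentences:
    seq = [idx[w] for w in s.split()]
    context = np.zeros(H)
    for cur in seq:
        h = sigmoid(W_in @ one_hot(cur) + W_ctx @ context)
        reps[cur] += h; counts[cur] += 1
        context = h
reps /= counts[:, None]

# Principal component analysis via SVD of the centred representations.
centred = reps - reps.mean(axis=0)
_, _, Vt = np.linalg.svd(centred, full_matrices=False)
pcs = centred @ Vt[:2].T
for w in words:
    print(f"{w:6s} PC1={pcs[idx[w], 0]:+.3f} PC2={pcs[idx[w], 1]:+.3f}")

# Hierarchical clustering (average linkage) over the learned representations.
Z = linkage(reps, method="average")
dendrogram(Z, labels=words, no_plot=True)  # inspect Z or plot offline
```

In such a setup, words that occur in similar syntactic and semantic contexts tend to develop nearby hidden representations, which is what the PCA projection and the cluster hierarchy make visible.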