Traditionally, it has been assumed that rules are necessary to explain language acquisition. Recently, Marcus, Vijayan, Rao, and Vishton (1999) have provided behavioral evidence that they claim can only be explained by invoking algebraic rules. In the first part of this paper, we show that, contrary to these claims, an existing simple recurrent network model of word segmentation can fit the relevant data without invoking any rules. Importantly, the model closely replicates the experimental conditions, and no changes were made to the model to accommodate the data. The second part provides a corpus analysis inspired by this model, demonstrating that lexical stress changes the basic representational landscape over which statistical learning takes place. This change makes the task of word segmentation easier for statistical learning models and further obviates the need for lexical stress rules to explain the bias toward trochaic stress patterns in English. Together, the connectionist simulations and the corpus analysis show that statistical learning devices are sufficiently powerful to eliminate the need for rules in an important part of language acquisition.