Learning to categorize requires distinguishing category members from non-members by detecting the features that covary with membership. Human subjects were trained to sort visual textures into two categories by trial and error with corrective feedback. Difficulty levels were increased by decreasing the proportion of covariant features. Pairwise similarity judgments were tested before and after category learning. Three effects were observed: (1) The lower the proportion of covariant features, the more trials it took to learn the category and the fewer the subjects who succeeded in learning it. After training, (2) perceived pairwise distance increased between categories and, to a lesser extent, (3) decreased within categories, at all levels of difficulty, but only for successful learners. This perceived between-category separation and within-category compression is called categorical perception (CP). A very simple neural network model for category learning using uniform binary (0/1) features showed similar CP effects. CP may occur because learning to selectively detect covariant features and ignore non-covariant features reduces the dimensionality of perceived similarity space. In addition to (1)–(3), the nets showed (4) a strong negative correlation between the proportion of covariant features and the size of the CP effect. This correlation was not evident in the human subjects, probably because, unlike the formal binary features of the input to the nets, which were all uniform, the visual features of the human inputs varied in difficulty.
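The network model is described only qualitatively above. A minimal sketch of the idea, under assumed parameters not given in the text (12 binary features of which 4 covary perfectly with category membership, one hidden layer of 8 sigmoid units trained by plain backpropagation), measures a between-category/within-category distance ratio in hidden-unit space before and after learning; an increase in that ratio is the separation-plus-compression signature of CP:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the abstract): 12 binary features,
# the first 4 of which covary perfectly with category membership;
# the remaining 8 are uninformative noise.
N_FEATURES, N_COVARIANT, N_PER_CAT = 12, 4, 40

def make_category(label, n):
    """Binary vectors whose covariant features all equal the category label."""
    x = rng.integers(0, 2, size=(n, N_FEATURES)).astype(float)
    x[:, :N_COVARIANT] = label
    return x

X = np.vstack([make_category(0, N_PER_CAT), make_category(1, N_PER_CAT)])
y = np.concatenate([np.zeros(N_PER_CAT), np.ones(N_PER_CAT)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 sigmoid units; layer size and learning rate are guesses.
W1 = rng.normal(0, 0.5, (N_FEATURES, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1));          b2 = np.zeros(1)

def mean_dist(A, B=None):
    """Mean pairwise Euclidean distance (within A, or between A and B)."""
    if B is None:
        d = np.linalg.norm(A[:, None] - A[None, :], axis=-1)
        return d[np.triu_indices(len(A), k=1)].mean()
    return np.linalg.norm(A[:, None] - B[None, :], axis=-1).mean()

def cp_index(H):
    """Between-category / within-category distance in hidden-unit space.
    A rise after learning = separation + compression, i.e. CP."""
    within = 0.5 * (mean_dist(H[y == 0]) + mean_dist(H[y == 1]))
    return mean_dist(H[y == 0], H[y == 1]) / (within + 1e-12)

before = cp_index(sigmoid(X @ W1 + b1))

# Plain backpropagation on the cross-entropy loss.
lr = 0.5
for _ in range(2000):
    H = sigmoid(X @ W1 + b1)
    p = sigmoid(H @ W2 + b2)
    d2 = (p - y[:, None]) / len(X)      # dL/dz for sigmoid + cross-entropy
    d1 = (d2 @ W2.T) * H * (1.0 - H)
    W2 -= lr * H.T @ d2; b2 -= lr * d2.sum(axis=0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(axis=0)

after = cp_index(sigmoid(X @ W1 + b1))
print(f"CP index before training: {before:.2f}, after: {after:.2f}")
```

Because the classification gradient pushes the weights on non-covariant features toward zero, the hidden representation comes to depend mainly on the covariant features, which is one way of cashing out the dimensionality-reduction account of CP. Varying `N_COVARIANT` in this sketch would let one probe effect (4), the relation between the proportion of covariant features and the size of the CP effect.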