In the first year of life, infants’ speech perception becomes attuned to the sounds of their native language. Many accounts of this early phonetic learning exist, but computational models predicting the attunement patterns observed in infants from the speech input they hear have been lacking. A recent study presented the first such model, drawing on algorithms proposed for unsupervised learning from naturalistic speech, and tested it on a single phone contrast. Here we study five such algorithms, selected for their potential cognitive relevance. We simulate phonetic learning with each algorithm and perform tests on three phone contrasts from different languages, comparing the results to infants’ discrimination patterns. The five models display varying degrees of agreement with empirical observations, showing that our approach can help decide between candidate mechanisms for early phonetic learning, and providing insight into which aspects of the models are critical for capturing infants’ perceptual development.