In previous work, we showed that a simple neurocomputational model (The Model, or TM) trained on the Ekman & Friesen Pictures of Facial Affect (POFA) dataset to categorize the images into the six basic expressions can account for a wide array of data (albeit from a single study) on facial expression processing. The model demonstrated categorical perception of facial expressions, as well as the so-called facial expression circumplex, a circular configuration based on MDS results that places the categories in the order happy, surprise, fear, sadness, anger, and disgust. Somewhat ironically, the circumplex in TM was generated from the similarity between the categorical outputs of the network, i.e., the six numbers representing the probability of each category given the face. Here, we extend this work by 1) using a new dataset, NimStim, that is much larger than POFA and is not as tightly controlled for the correct Facial Action Units; 2) using a completely different neural network architecture, a Siamese Neural Network (SNN) that maps two faces through twin networks into a 2D similarity space; and 3) training the network only implicitly, based on a teaching signal indicating that pairs of faces are in either the same or different categories. Our results show that in this setting, the network learns a representation that is very similar to the original circumplex. Fear and surprise overlap, which is consistent with the inherent confusability between these two facial expressions. Our results suggest that humans evolved in such a way that nearby emotions are represented by similar appearances.
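To make the training setup concrete, the following sketch (in PyTorch) shows twin networks with shared weights mapping two face images into a 2D embedding, trained from a binary same/different-category signal. This is a minimal illustration, not the authors' actual model: the layer sizes, input resolution, the class name SiameseNet, and the use of a contrastive loss are all illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SiameseNet(nn.Module):
        """Twin networks with shared weights map each face to a 2D point."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 64, 256),  # assumes 64x64 grayscale faces (hypothetical)
                nn.ReLU(),
                nn.Linear(256, 2),        # the 2D similarity space
            )

        def forward(self, x1, x2):
            # The same encoder (shared weights) processes both faces
            return self.encoder(x1), self.encoder(x2)

    def contrastive_loss(z1, z2, same, margin=1.0):
        """same = 1 if the two faces share an expression category, else 0."""
        d = F.pairwise_distance(z1, z2)
        # Pull same-category pairs together; push different-category
        # pairs apart until they are at least `margin` away
        return torch.mean(same * d.pow(2) +
                          (1 - same) * F.relu(margin - d).pow(2))

    # One training step on a dummy batch of 8 face pairs
    x1, x2 = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
    same = torch.randint(0, 2, (8,)).float()
    model = SiameseNet()
    z1, z2 = model(x1, x2)
    loss = contrastive_loss(z1, z2, same)
    loss.backward()

Note that the only supervision here is the same/different label: the network is never told which category a face belongs to, yet the 2D embedding it learns can still organize the categories, which is the sense in which the training is "implicit."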