The tendency for humans to give preferential attention to animate agents in their immediate surroundings has been well documented and likely reflects an evolved specialization to a persistent adaptive problem. In uncertain or ambiguous cases, this tendency can result in the over-detection of animacy, as the potential costs of failing to detect an animate agent far outweigh those of a mistaken identification. In line with this, it seems likely that humans have evolved a sensitivity to specific cues indicative of animacy, such that the mere presence of these cues leads to detection regardless of the objective category membership of the entity in question. A wealth of research speaks to this effect with regard to motion cues, specifically the capacity for self-propulsion and goal-directed action. Morphological cues have also been implicated, most especially the presence of facial features, as they signal a capacity for perceptual feedback from the environment, which is essential for goal-directed motion. However, it remains an open question whether animacy detection is similarly sensitive to facial information in the absence of motion cues.
The experiments reported here sought to address this question with a novel task in which participants judged the animacy (or the animal versus object category membership) of images of animals and objects, each presented with or without visible facial features. Beyond replicating a general advantage for detecting animate agents over inanimate objects, the primary prediction was that facial features would have a differential effect on performance: improving it when visible in animals and hindering it when visible in objects. Experiments 1a and 1b provided preliminary confirmation of this pattern using images of familiar and unfamiliar animals (e.g., dogs versus jellyfish) and unaltered images of objects with and without faces. Experiment 2 improved on this design by more closely matching the image sets (the same animals facing toward or away from the camera, and the same face-bearing objects with their facial features digitally disrupted) and by changing the task prompt from yes/no judgments of animacy to categorization into animal or object groups. Experiment 3 examined the face inversion effect, that is, the impaired recognition of faces when they are presented upside down, in the context of animal-object categorization. Lastly, Experiments 4 and 5 attempted to extend the findings of Experiment 2 to preschool-aged children using a card sorting task (Experiment 4) and a computerized animal detection task (Experiment 5). Together, the results of this series of experiments highlight the prominent role of facial features in detecting animate agents in one’s surroundings.