Do you Hear What I See? The Voice and Face of a Talker Similarly Influence the Speech of Multiple Listeners
- Author(s): Sanchez, Kauyumari
- Advisor(s): Rosenblum, Lawrence
Speech alignment occurs when interlocutors shift their speech to become more similar to each other. Alignment is also found when one is asked to shadow (quickly repeat aloud) perceived words recorded from a model talker. Prior investigations of alignment have addressed whether shadowers of auditory (e.g., Goldinger, 1998) or visual (e.g., Miller, Sanchez, & Rosenblum, 2010) speech shift in the direction of a model. However, it is unknown whether multiple shadowers align to a specific model in the same ways or uniquely. This dissertation addressed two questions: Are the utterances of shadowers of the same model more similar to each other than they are to the utterances of shadowers of a different model? Does the sensory modality of the shadowed speech affect the perceptual similarity between shadowers of the same model?

In Experiment Series 1, evidence was obtained that shadowers similarly aligned to the auditory speech of a model. In Experiment 1a, perceptual raters judged the utterances of shadowers of the same heard model as more similar to each other than to utterances from shadowers of another heard model. Experiment 1b showed that the results of Experiment 1a were due to speech-style shifts toward those of the shadowed model, and that the shadowers were not similar before exposure to the model. Acoustic analyses of the shadowed words also revealed that shadowers of the same model were more similar to each other along some acoustic dimensions than to shadowers of a different model. The articulatory dimensions underlying these acoustic dimensions could potentially also be perceived in visible articulation, suggesting that the results of Experiment 1a might also be found for shadowers of visual speech (lip-reading).

In Experiment Series 2, evidence was obtained that shadowers similarly aligned to the visual speech of a specific model.
In Experiment 2a, perceptual raters judged the utterances of shadowers of the same lip-read model as more similar to each other than to the shadowed utterances of another lip-read model. Experiment 2b compared the auditorily and visually shadowed speech of shadowers of the same or a different model. Utterances of multiple shadowers of the same model were judged as more similar than those of shadowers of another model, regardless of whether the model's speech was shadowed auditorily or visually. These results suggest that shadowers align to similar properties of a specific model's speech even when doing so via different modalities. Implications for episodic encoding and gestural theories are discussed.