The ability to perceive and categorize spoken words is a remarkable capability of the human auditory system. Songbirds are one of the few animal groups that face auditory tasks of comparable complexity. In this dissertation, I analyze auditory responses in the zebra finch at both the behavioral (chapter one) and neuronal (chapter two) levels. In chapter one, I use an operant conditioning paradigm to show that female songbirds can identify the social context in which a male's song was sung (alone, or directed toward a female). Females require only a short segment of recorded song (a single "motif") to perform this task. I also show that, given only simple temporal information about the stimuli, a machine-learning algorithm can classify most males' motifs according to social context. However, the model's behavior was not consistent with that of the females on individual stimuli, indicating that spectral and temporal cues beyond those tested by the model influence the birds' behavior. Finally, after lesions of a nucleus required for social-context-dependent differences in spectral variability, most males still produced songs whose social context was detectable to females performing the task. Chapter two describes the results of a series of acute electrophysiological recordings in anesthetized female zebra finches. I analyze the responses of single neurons in the songbird auditory forebrain to two types of stimuli: birdsong and an artificially generated stimulus. Using a relatively unbiased mutual-information-based technique, I show that the responses of these neurons change dramatically depending on the stimulus. Across different stages of the ascending auditory pathway, song stimuli gave rise to more complex receptive fields than the artificial stimulus did. Receptive fields calculated from responses to song also had excellent predictive value, far surpassing that of receptive fields calculated from the artificial stimulus. These results indicate that for many neurons in the songbird auditory forebrain, receptive field structure depends strongly on stimulus statistics, and that receptive fields constructed in response to one stimulus class contain surprisingly little information about responses to other sounds.
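The temporal-feature classification described for chapter one lends itself to a brief illustration. The sketch below is hypothetical: the feature (motif duration), the synthetic data, and the choice of logistic regression are assumptions for illustration only, not the dissertation's actual stimuli or algorithm.

```python
# Minimal sketch: classifying song motifs by social context using only a
# simple temporal feature (motif duration). Feature choice and data are
# illustrative assumptions, not the dissertation's stimuli or model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic motif durations (s); directed song is assumed here to be
# slightly faster and less variable than undirected song.
directed = rng.normal(loc=0.78, scale=0.02, size=100)
undirected = rng.normal(loc=0.82, scale=0.04, size=100)

X = np.concatenate([directed, undirected]).reshape(-1, 1)
y = np.concatenate([np.zeros(100), np.ones(100)])  # 0 = directed, 1 = undirected

clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

A real analysis would use richer temporal features and compare the model's per-stimulus choices against the females' behavior, as the abstract describes.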
This thesis examines the neural changes that contribute to the formation, storage, retrieval, and extinction of a learned association between a stimulus and a reward. It addresses several questions to provide insight into the neural substrates of goal-directed behavior: What neural changes mediate the initial formation of an associative memory between a stimulus and a reward? What synaptic changes correspond to the development of a change in task-relevant neuronal firing? What is the mechanism of these synaptic changes, and do they play a causal role? How are complex emotions such as frustration represented in the brain? How are reward-associated cues endowed with the power to guide goal-directed behavior in the absence of primary rewards? Here I show that behavioral improvement accompanies the rapid recruitment of amygdala neurons to the ensemble encoding a reward-predictive cue, and that this change is mediated by rapid strengthening of thalamic synapses onto amygdala neurons through a postsynaptic increase in AMPAR-mediated currents. These synaptic changes, as well as acquisition of the task, depend on NMDAR activation. Amygdala neurons that store the memory of a reward are activated when an animal compares an expected reward with its unexpected omission. Finally, distinct populations of amygdala neurons reflect the motivating and reinforcing properties of a cue endowed with emotional significance to guide behavior.
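One way the "recruitment" of neurons to a cue-encoding ensemble might be quantified is sketched below: a neuron is flagged as recruited if its cue-evoked firing rate increases significantly from early to late training trials. The test, the firing-rate model, and the data are all illustrative assumptions, not the thesis's analysis.

```python
# Illustrative sketch (not the thesis's method): flag a neuron as
# "recruited" to the cue-encoding ensemble when its cue-evoked firing
# increases significantly from early to late training trials.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

def recruited(early_rates, late_rates, alpha=0.05):
    """True if late cue-evoked rates significantly exceed early rates."""
    stat, p = mannwhitneyu(late_rates, early_rates, alternative="greater")
    return p < alpha

# Synthetic cue-evoked firing rates (Hz), 30 trials per epoch.
early = rng.poisson(lam=5, size=30)    # before learning
late = rng.poisson(lam=12, size=30)    # after learning
print(recruited(early, late))          # likely True for this synthetic neuron
```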
Songbirds, like humans, learn to produce and to recognize complex, species-specific sounds, providing a biologically tractable model for studying the neural mechanisms of speech production and perception. I used chronic single-neuron recordings and operant behavioral techniques to ask how complex sounds are represented in the songbird forebrain, and how this representation may relate to the birds' perception of song. I found that neurons in field L, the avian analog of human primary auditory cortex, represent three types of modulation found in natural sounds: spectral modulations, temporal modulations, and spectro-temporal modulations. Neurons specialized for different modulations have different physiological properties and are localized to different parts of field L. The response properties of these neurons depend nonlinearly on the average intensity of the stimulus: at high intensities they respond only to differences in sound energy between nearby frequencies or times, while at low intensities they integrate information across nearby frequencies and times. This nonlinearity is shared with the visual system and may represent a general computational principle of sensory encoding. Finally, I used operant techniques to ask whether songbirds could generalize a learned song discrimination to songs altered in pitch, duration, or volume. Birds generalized correctly to songs altered in duration but not to those altered in pitch or volume. These data suggest that birds use the spatial pattern of neurons activated by a song, rather than the temporal pattern of neural activation, to determine which song they heard.
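Spectral, temporal, and joint spectro-temporal modulations are commonly quantified with the modulation power spectrum, the 2-D Fourier transform of a log spectrogram. The sketch below illustrates that standard analysis on a synthetic frequency-modulated tone; it is not necessarily the dissertation's exact method, and the signal and parameters are assumptions.

```python
# Minimal sketch of a modulation power spectrum: the 2-D Fourier transform
# of a log spectrogram separates temporal modulations (Hz), spectral
# modulations (cycles/Hz), and joint spectro-temporal modulations.
import numpy as np
from scipy.signal import spectrogram

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
# Synthetic stand-in for a song bout: a frequency-modulated tone.
sound = np.sin(2 * np.pi * (3000 * t + 500 * np.sin(2 * np.pi * 10 * t)))

freqs, times, sxx = spectrogram(sound, fs=fs, nperseg=512, noverlap=384)
log_spec = np.log(sxx + 1e-12)

# 2-D FFT of the log spectrogram: power as a function of spectral
# modulation (rows) and temporal modulation (columns).
mps = np.abs(np.fft.fftshift(np.fft.fft2(log_spec))) ** 2

dt = times[1] - times[0]   # seconds per spectrogram column
df = freqs[1] - freqs[0]   # Hz per spectrogram row
temporal_mods = np.fft.fftshift(np.fft.fftfreq(log_spec.shape[1], d=dt))  # Hz
spectral_mods = np.fft.fftshift(np.fft.fftfreq(log_spec.shape[0], d=df))  # cycles/Hz
print(mps.shape, temporal_mods.max(), spectral_mods.max())
```

In this representation, energy along the temporal-modulation axis corresponds to rhythmic amplitude changes, energy along the spectral-modulation axis to harmonic stacks, and off-axis energy to joint spectro-temporal features such as frequency sweeps.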