The goal of research in this lab is to understand listeners' ability to make sense of complex sounds, and to understand how this ability develops starting in infancy. These questions are explored in different domains. Infant-directed speech (babytalk or motherese), music, and other complex auditory patterns contain rich temporal and pitch information that infants must resolve, encode, and process in order to make sense of these signals. Infants' perceptual abilities are tested using a variety of behavioral methods. Other studies examine speech production in mother-infant dyads, using acoustical analyses to track developmental changes in the pitch and temporal properties of mother-infant interaction.
The laboratory consists of a sound-attenuated booth, a control room, and workspace for research assistants. The booth is set up to test infants in a variety of behavioral paradigms (e.g., conditioned head-turn and preference procedures), including eye-tracking (using a faceLAB 4.5 system). Auditory and visual stimuli are presented via GSI speakers and three LCD displays controlled by a PowerMac computer in the adjacent control room. Max/MSP software is used for real-time signal processing, presenting stimuli, recording responses, and coordinating other aspects of the experimental procedures. The laboratory also contains iMac computers for recording and analyzing audio/video of infant interactions, using Praat.
This lab is directed by Nicholas Smith, Ph.D., and benefits from collaboration with Mary Pat Moeller, Ph.D., and Mark VanDam, Ph.D.
Temporal processing and perceptual organization. The sound patterns of speech and music consist of rapid changes in energy across different frequency regions. The auditory system has been characterized as an array of frequency-tuned channels. The central auditory processes responsible for analyzing the auditory scene and perceiving patterns must resolve and integrate information across these channels in order to accurately represent the stimulus or sound source. Our research uses gap detection and temporal order judgment tasks to examine these between-channel processes, and how they are affected by stimulus, context, and task-related factors, as well as by perceptual learning and development.
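The between-channel gap-detection stimuli described above can be illustrated with a minimal synthesis sketch. All parameter values here (frequencies, burst and gap durations, sample rate) are illustrative assumptions, not the lab's actual stimulus settings:

```python
import math

def tone_burst(freq_hz, dur_s, sr=44100, amp=0.5):
    """Generate a sine tone burst as a list of samples."""
    n = int(dur_s * sr)
    return [amp * math.sin(2 * math.pi * freq_hz * t / sr) for t in range(n)]

def gap_stimulus(f1, f2, gap_s, burst_s=0.02, sr=44100):
    """Leading tone burst at f1, a silent gap, then a trailing burst at f2.
    When f1 == f2 the gap falls within one frequency channel; when f1 != f2
    the listener must compare timing *between* frequency channels."""
    silence = [0.0] * int(gap_s * sr)
    return tone_burst(f1, burst_s, sr) + silence + tone_burst(f2, burst_s, sr)

# Within-channel condition: both gap markers at 1 kHz (illustrative values)
within = gap_stimulus(1000, 1000, gap_s=0.005)
# Between-channel condition: markers in different frequency regions
between = gap_stimulus(1000, 4000, gap_s=0.005)
```

In a gap-detection task, the gap duration would be varied adaptively to estimate the shortest detectable gap; between-channel thresholds are typically much longer than within-channel ones, which is what makes the comparison informative.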
Mother-infant speech interaction. Infant-directed (ID) speech exhibits characteristic prosodic modifications: higher overall pitch, expanded pitch range, increased repetition, and extended pauses. Infants have strong preferences for ID speech and respond in ways that promote this kind of interaction. The main goals of our research are to examine (1) how mothers' ID speech changes over the course of development, and how it may be affected by hearing loss in the child, (2) the development of prosody in children with normal hearing and with hearing loss, and (3) the rhythmic aspects of mother-child vocal interaction.
Temporal processing and perceptual organization. Being able to hear is the first step toward making sense of sound. In the real world, we hear rapidly changing combinations of sounds from different sources (e.g., a group of people talking on a busy city street). Central auditory processing in the brain needs to sort this auditory input and figure out which pieces go together, and how.
Mother-infant speech interaction. When adults talk to infants and children, their speech becomes more musical. This serves a number of functions in development. It helps to regulate infants' emotions, but may also help infants to better perceive speech and learn language. Our research examines how this musical aspect of speech changes over the course of development, and how these changes might vary depending on whether the child has normal hearing or different degrees of hearing loss.
For more information about current studies, or to have your infant or child participate in future studies, please contact Dr. Nicholas Smith at