Audiovisual Speech Processing Laboratory
Our lab studies whether and how infants, children, and adults use lipreading and other visual speech cues to better understand speech in noisy backgrounds. We also study how hearing loss and hearing aids affect the ability to use visual speech cues.
Our studies include:
- analyzing the physical relationship between acoustic and visual speech signals
- testing how well adults and children perceive auditory and audiovisual speech in different backgrounds
- looking for perceptual, cognitive, and linguistic skills that underlie development of the ability to use visual speech cues
We collaborate with the Audibility, Perception, and Cognition Laboratory and the Human Auditory Development Laboratory, and depend on the Center for Perception and Communication in Children for support in conducting our experiments. If you are interested in participating in our studies, please sign up here to join our list of research volunteers, or contact Dr. Kaylah Lalonde or a member of the research team at
When we listen to people speak, there is often noise in the background, which diminishes our ability to perceive speech. As adults, we use visual cues on the talker's face (i.e., lipreading) to compensate for noisy environments. Children have more difficulty understanding speech in noisy environments, and less is known about their ability to use visual speech cues in noise. The purpose of this study is to learn how children use lipreading and other visual information on a talker's face to help them understand speech in noisy backgrounds. For this research, we test children between 5 and 19 years of age and adults between 19 and 35 years of age.