Development of Audiovisual Speech Enhancement in Children
When we communicate face-to-face, being able to see the person we're talking to makes it easier to understand speech, especially when the acoustic signal is degraded by noise or hearing loss. We call these benefits audiovisual speech enhancement.
Visual speech helps in many ways. It helps us to know when to listen, fills in missing auditory speech information, and helps to separate speech from similar competing sounds. We are studying how well children at various ages can use visual speech in these different ways. Experiments examine how sensitive children are to different audiovisual cues and how much these different mechanisms contribute to individual differences in children's audiovisual speech enhancement.
This project is funded by an NIH Centers of Biomedical Research Excellence (COBRE) grant (NIH-NIGMS / 5P20GM109023-04).
Factors Influencing Audiovisual Speech Benefit in Children with Hearing Loss
Frequency-specific audibility is a measure of how much speech a person can hear at each pitch. The more hearing loss there is at a certain pitch, the poorer the audibility. Our lab is studying how frequency-specific audibility affects how much speech children can understand when talking with someone face-to-face. We expect that lipreading is more helpful for children who have hearing loss at higher pitches, because lipreading helps fill in the “blanks” of the high-pitched speech sounds they cannot hear. Using the information from this study, audiologists will be able to use frequency-specific audibility to better estimate how much speech a child with hearing loss can understand when talking with someone face-to-face. This may help guide decisions about speech and language therapy.
This project is funded by an NIH National Institute on Deafness and Other Communication Disorders grant (NIH-NIDCD / 1R21DC020544) and is a collaboration with the
Auditory Perceptual Encoding Laboratory and the
Audibility, Perception and Cognition Laboratory.
Effects of Face Masks on Word Learning in Preschool-Age Children
Word learning in early childhood is important for long-term academic and occupational success. Preschool-age children learn words solely from spoken input. Face masks create barriers to both the acoustic and visual components of speech. The purpose of this research is to test the extent to which face masks disrupt word learning, which will help us understand how to support word learning when access to high-fidelity speech input is reduced.
This project is funded by a grant from the NIH Eunice Kennedy Shriver National Institute of Child Health and Human Development (NIH-NICHD / 3R01HD100439-02) and is a collaboration with Rush University and the BTNRH
Language Learning and Memory Laboratory.