Seeing a talking face matters: Infants' segmentation of continuous auditory-visual speech
Date
2023
Author
Tan, Sok Hui Jessica
Kalashnikova, Marina
Burnham, Denis
Metadata
Tan, S. H. J., Kalashnikova, M., & Burnham, D. (2023). Seeing a talking face matters: Infants' segmentation of continuous auditory-visual speech. Infancy, 28(2), 277–300. https://doi.org/10.1111/infa.12509
Infancy
Abstract
Visual speech cues from a speaker's talking face aid speech segmentation in adults, but despite the importance of speech segmentation in language acquisition, little is known about the possible influence of visual speech on infants' speech segmentation. Here, to investigate whether visual information facilitates speech segmentation, two groups of English-learning 7-month-old infants were presented with continuous speech passages, one group with auditory-only (AO) speech and the other with auditory-visual (AV) speech. Additionally, the possible relation between infants' relative attention to the speaker's mouth versus eye regions and their segmentation performance was examined. Both the AO and the AV groups of infants successfully segmented words from the continuous speech stream, but segmentation performance persisted for longer for infants in the AV group. Interestingly, while AV group infants showed no significant relation between the relative amount of time spent fixating the speaker's mouth versus eyes and word segmentation, their attention to the mouth was greater than that of AO group infants, especially early in test trials. The results are discussed in relation to the possible pathways through which visual speech cues aid speech perception.