Speech segmentation is facilitated by visual cues

Publication date

2019-07-05T16:17:53Z

2010

Abstract

Evidence from infant studies indicates that language learning can be facilitated by multimodal cues. We extended this observation to adult language learning by studying the effects of simultaneous visual cues (nonassociated object images) on speech segmentation performance. Our results indicate that segmentation of new words from a continuous speech stream is facilitated by simultaneous visual input that is presented at or near syllables exhibiting the low transitional probability indicative of word boundaries. This indicates that temporal audio-visual contiguity helps direct attention to word boundaries at the earliest stages of language learning. Off-boundary or arrhythmic picture sequences did not affect segmentation performance, suggesting that the language learning system can effectively disregard noninformative visual information. Detection of temporal contiguity between multimodal stimuli may be useful to both infants and second-language learners, not only for facilitating speech segmentation but also for detecting word-object relationships in natural environments.

Document type

Article


Accepted version

Language

English

Published by

Taylor and Francis

Related documents

Postprint version of the document published at: https://doi.org/10.1080/17470210902888809

Quarterly Journal of Experimental Psychology, 2010, vol. 63, num. 2, p. 260-274

https://doi.org/10.1080/17470210902888809

Recommended citation

This citation was generated automatically.

Rights

(c) The Experimental Psychology Society, 2010