Cross-modal noise compensation in audiovisual words
Date
2017
Author
Baart, Martijn
Armstrong, Blair C.
Martin, Clara D.
Frost, Ram
Carreiras, Manuel
Baart, M., Armstrong, B. C., Martin, C. D., Frost, R., & Carreiras, M. (2017). Cross-modal noise compensation in audiovisual words. Scientific Reports, 7:42055. doi:10.1038/srep42055
Abstract
Perceiving linguistic input is vital for human functioning, but the process is complicated by the fact that the incoming signal is often degraded. However, humans can compensate for unimodal noise by relying on simultaneous sensory input from another modality. Here, we investigated noise compensation for spoken and printed words in two experiments. In the first, behavioral experiment, we observed that accuracy was modulated by reaction time (RT), bias, and sensitivity, but noise compensation could nevertheless be explained via accuracy differences when controlling for RT, bias, and sensitivity. In the second experiment, we also measured event-related potentials (ERPs) and observed robust electrophysiological correlates of noise compensation starting at around 350 ms after stimulus onset, indicating that noise compensation is most prominent at lexical/semantic processing levels.