dc.contributor.author | Kapnoula, Efthymia C. | |
dc.contributor.author | McMurray, Bob | |
dc.date.accessioned | 2021-11-04T14:39:32Z | |
dc.date.available | 2021-11-04T14:39:32Z | |
dc.date.issued | 2021 | |
dc.identifier.citation | Efthymia C. Kapnoula, Bob McMurray, Idiosyncratic use of bottom-up and top-down information leads to differences in speech perception flexibility: Converging evidence from ERPs and eye-tracking, Brain and Language, Volume 223, 2021, 105031, ISSN 0093-934X, https://doi.org/10.1016/j.bandl.2021.105031 | es_ES |
dc.identifier.issn | 0093-934X | |
dc.identifier.uri | http://hdl.handle.net/10810/53704 | |
dc.description | Available online 8 October 2021. | es_ES |
dc.description.abstract | Listeners generally categorize speech sounds in a gradient manner. However, recent work, using a visual analogue scaling (VAS) task, suggests that some listeners show more categorical performance, leading to less flexible cue integration and poorer recovery from misperceptions (Kapnoula et al., 2017, 2021). We asked how individual differences in speech gradiency can be reconciled with the well-established gradiency in the modal listener, showing how VAS performance relates to both Visual World Paradigm and EEG measures of gradiency. We also investigated three potential sources of these individual differences: inhibitory control; lexical inhibition; and early cue encoding. We used the N1 ERP component to track pre-categorical encoding of Voice Onset Time (VOT). The N1 linearly tracked VOT, reflecting a fundamentally gradient speech perception; however, for less gradient listeners, this linearity was disrupted near the boundary. Thus, while all listeners are gradient, they may show idiosyncratic encoding of specific cues, affecting downstream processing. | es_ES |
dc.description.sponsorship | This project was supported by NIH Grant DC008089 awarded to BM. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 793919, awarded to EK. This work was partially supported by the Basque Government through the BERC 2018-2021 program and by the Spanish State Research Agency through BCBL Severo Ochoa excellence accreditation SEV-2015-0490. | es_ES |
dc.language.iso | eng | es_ES |
dc.publisher | Brain and Language | es_ES |
dc.relation | info:eu-repo/grantAgreement/EC/H2020/MC/793919 | es_ES |
dc.relation | info:eu-repo/grantAgreement/Basque Government/BERC2018-2021 | es_ES |
dc.relation | info:eu-repo/grantAgreement/MINECO/SEV-2015-0490 | es_ES |
dc.rights | info:eu-repo/semantics/openAccess | es_ES |
dc.subject | Speech perception | es_ES |
dc.subject | Categorization | es_ES |
dc.subject | Gradiency | es_ES |
dc.subject | Categorical perception | es_ES |
dc.subject | Individual differences | es_ES |
dc.subject | N100 | es_ES |
dc.subject | P300 | es_ES |
dc.subject | EEG | es_ES |
dc.subject | Visual World Paradigm | es_ES |
dc.subject | Visual analogue scale | es_ES |
dc.title | Idiosyncratic use of bottom-up and top-down information leads to differences in speech perception flexibility: Converging evidence from ERPs and eye-tracking | es_ES |
dc.type | info:eu-repo/semantics/article | es_ES |
dc.rights.holder | © 2021 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license. | es_ES |
dc.relation.publisherversion | https://www.sciencedirect.com/journal/brain-and-language | es_ES |
dc.identifier.doi | 10.1016/j.bandl.2021.105031 | |