Simple item record

dc.contributor.author          Kapnoula, Efthymia C.
dc.contributor.author          McMurray, Bob
dc.date.accessioned            2021-11-04T14:39:32Z
dc.date.available              2021-11-04T14:39:32Z
dc.date.issued                 2021
dc.identifier.citation         Efthymia C. Kapnoula, Bob McMurray, Idiosyncratic use of bottom-up and top-down information leads to differences in speech perception flexibility: Converging evidence from ERPs and eye-tracking, Brain and Language, Volume 223, 2021, 105031, ISSN 0093-934X, https://doi.org/10.1016/j.bandl.2021.105031
dc.identifier.issn             0093-934X
dc.identifier.uri              http://hdl.handle.net/10810/53704
dc.description                 Available online 8 October 2021.
dc.description.abstract        Listeners generally categorize speech sounds in a gradient manner. However, recent work, using a visual analogue scaling (VAS) task, suggests that some listeners show more categorical performance, leading to less flexible cue integration and poorer recovery from misperceptions (Kapnoula et al., 2017, 2021). We asked how individual differences in speech gradiency can be reconciled with the well-established gradiency in the modal listener, showing how VAS performance relates to both Visual World Paradigm and EEG measures of gradiency. We also investigated three potential sources of these individual differences: inhibitory control; lexical inhibition; and early cue encoding. We used the N1 ERP component to track pre-categorical encoding of Voice Onset Time (VOT). The N1 linearly tracked VOT, reflecting a fundamentally gradient speech perception; however, for less gradient listeners, this linearity was disrupted near the boundary. Thus, while all listeners are gradient, they may show idiosyncratic encoding of specific cues, affecting downstream processing.
dc.description.sponsorship     This project was supported by NIH Grant DC008089 awarded to BM. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 793919, awarded to EK. This work was partially supported by the Basque Government through the BERC 2018-2021 program and by the Spanish State Research Agency through BCBL Severo Ochoa excellence accreditation SEV-2015-0490.
dc.language.iso                eng
dc.publisher                   Brain and Language
dc.relation                    info:eu-repo/grantAgreement/EC/H2020/MC/793919
dc.relation                    info:eu-repo/grantAgreement/Basque Government/BERC2018-2021
dc.relation                    info:eu-repo/grantAgreement/MINECO/SEV-2015-0490
dc.rights                      info:eu-repo/semantics/openAccess
dc.subject                     Speech perception
dc.subject                     Categorization
dc.subject                     Gradiency
dc.subject                     Categorical perception
dc.subject                     Individual differences
dc.subject                     N100
dc.subject                     P300
dc.subject                     EEG
dc.subject                     Visual World Paradigm
dc.subject                     Visual analogue scale
dc.title                       Idiosyncratic use of bottom-up and top-down information leads to differences in speech perception flexibility: Converging evidence from ERPs and eye-tracking
dc.type                        info:eu-repo/semantics/article
dc.rights.holder               © 2021 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license.
dc.relation.publisherversion   https://www.sciencedirect.com/journal/brain-and-language
dc.identifier.doi              10.1016/j.bandl.2021.105031
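
For reference, a minimal sketch of how a record like this one is typically exposed over OAI-PMH as unqualified Dublin Core (oai_dc). The collapsing of qualified fields to unqualified elements (e.g. dc.contributor.author to dc:creator) follows standard OAI-PMH mapping conventions; the excerpt below is illustrative, not the repository's actual output, and omits most fields for brevity:

    <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
      <!-- qualified DSpace fields collapse to unqualified DC elements -->
      <dc:creator>Kapnoula, Efthymia C.</dc:creator>
      <dc:creator>McMurray, Bob</dc:creator>
      <dc:date>2021</dc:date>
      <dc:title>Idiosyncratic use of bottom-up and top-down information leads
        to differences in speech perception flexibility: Converging evidence
        from ERPs and eye-tracking</dc:title>
      <dc:type>info:eu-repo/semantics/article</dc:type>
      <dc:language>eng</dc:language>
      <!-- repositories commonly repeat dc:identifier for handle and DOI -->
      <dc:identifier>http://hdl.handle.net/10810/53704</dc:identifier>
      <dc:identifier>10.1016/j.bandl.2021.105031</dc:identifier>
      <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
    </oai_dc:dc>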


