Showing simple item record

dc.contributor.author: Luthra, Sahil
dc.contributor.author: Correia, João M.
dc.contributor.author: Kleinschmidt, Dave F.
dc.contributor.author: Mesite, Laura
dc.contributor.author: Myers, Emily B.
dc.date.accessioned: 2020-09-21T11:33:47Z
dc.date.available: 2020-09-21T11:33:47Z
dc.date.issued: 2020
dc.identifier.citation: Luthra S, Correia JM, Kleinschmidt DF, Mesite L, Myers EB. Lexical Information Guides Retuning of Neural Patterns in Perceptual Learning for Speech. J Cogn Neurosci. 2020;32(10):2001-2012. doi:10.1162/jocn_a_01612 [es_ES]
dc.identifier.issn: 0898-929X
dc.identifier.uri: http://hdl.handle.net/10810/46162
dc.description: Posted Online August 31, 2020 [es_ES]
dc.description.abstract: A listener's interpretation of a given speech sound can vary probabilistically from moment to moment. Previous experience (i.e., the contexts in which one has encountered an ambiguous sound) can further influence the interpretation of speech, a phenomenon known as perceptual learning for speech. This study used multivoxel pattern analysis to query how neural patterns reflect perceptual learning, leveraging archival fMRI data from a lexically guided perceptual learning study conducted by Myers and Mesite [Myers, E. B., & Mesite, L. M. Neural systems underlying perceptual adjustment to non-standard speech tokens. Journal of Memory and Language, 76, 80-93, 2014]. In that study, participants first heard ambiguous /s/-/∫/ blends in either /s/-biased lexical contexts (epi_ode) or /∫/-biased contexts (refre_ing); subsequently, they performed a phonetic categorization task on tokens from an /asi/-/a∫i/ continuum. In the current work, a classifier was trained to distinguish between phonetic categorization trials in which participants heard unambiguous productions of /s/ and those in which they heard unambiguous productions of /∫/. The classifier was able to generalize this training to ambiguous tokens from the middle of the continuum on the basis of individual participants' trial-by-trial perception. We take these findings as evidence that perceptual learning for speech involves neural recalibration, such that the pattern of activation approximates the perceived category. Exploratory analyses showed that left parietal regions (supramarginal and angular gyri) and right temporal regions (superior, middle, and transverse temporal gyri) were most informative for categorization. Overall, our results inform an understanding of how moment-to-moment variability in speech perception is encoded in the brain. [es_ES]
dc.description.sponsorship: This work was supported by NSF IGERT DGE-1144399, NIH R03 DC009395 (PI: Myers), NIH R01 DC013064 (PI: Myers), and an NSF Graduate Research Fellowship to S. L. The authors report no conflict of interest. [es_ES]
dc.language.iso: eng [es_ES]
dc.publisher: Journal of Cognitive Neuroscience [es_ES]
dc.rights: info:eu-repo/semantics/openAccess [es_ES]
dc.title: Lexical Information Guides Retuning of Neural Patterns in Perceptual Learning for Speech [es_ES]
dc.type: info:eu-repo/semantics/article [es_ES]
dc.rights.holder: © 2020 Massachusetts Institute of Technology [es_ES]
dc.relation.publisherversion: https://www.mitpressjournals.org/loi/jocn [es_ES]
dc.identifier.doi: 10.1162/jocn_a_01612
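
The abstract above describes a multivoxel pattern analysis in which a classifier is trained on trials with unambiguous /s/ and /∫/ tokens and then generalized to ambiguous mid-continuum trials, with the key question being whether its predictions track each listener's trial-by-trial percepts. The sketch below illustrates that cross-decoding logic on synthetic data only; the classifier choice (a linear SVM via scikit-learn), the variable names, and the data are assumptions for illustration, not the authors' actual pipeline.

```python
# Illustrative sketch of the cross-decoding logic summarized in the abstract:
# train on voxel patterns from unambiguous /s/ vs. /sh/ trials, then test
# whether predictions for ambiguous mid-continuum trials agree with the
# listener's trial-by-trial behavioral responses.
# All data are synthetic and all names are hypothetical.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_voxels = 200                        # voxels in a hypothetical ROI
n_train, n_test = 80, 40              # unambiguous vs. ambiguous trials

# Synthetic voxel patterns: unambiguous trials carry a clear category signal.
train_labels = rng.integers(0, 2, n_train)           # 0 = /s/, 1 = /sh/
signal = rng.normal(size=n_voxels)                    # category-specific pattern
X_train = rng.normal(size=(n_train, n_voxels)) + np.outer(train_labels - 0.5, signal)

# Ambiguous trials: weaker signal, aligned with what the listener reported hearing.
percepts = rng.integers(0, 2, n_test)                 # trial-by-trial responses
X_test = rng.normal(size=(n_test, n_voxels)) + 0.5 * np.outer(percepts - 0.5, signal)

# Train on unambiguous trials, then generalize to ambiguous ones.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X_train, train_labels)
predicted = clf.predict(X_test)

# Agreement between the neural classifier's output and the reported percept.
agreement = (predicted == percepts).mean()
print(f"Classifier-percept agreement on ambiguous trials: {agreement:.2f}")
```

In a design like this, above-chance agreement on ambiguous trials is what would support the claim that the activation pattern approximates the perceived category rather than the physical stimulus.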



