
dc.contributor.author  Bourguignon, Mathieu
dc.contributor.author  Molinaro, Nicola
dc.contributor.author  Lizarazu, Mikel
dc.contributor.author  Taulu, Samu
dc.contributor.author  Jousmäki, Veikko
dc.contributor.author  Lallier, Marie
dc.contributor.author  Carreiras, Manuel
dc.contributor.author  De Tiège, Xavier
dc.date.accessioned  2020-05-21T07:58:55Z
dc.date.available  2020-05-21T07:58:55Z
dc.date.issued  2020
dc.identifier.citation  Mathieu Bourguignon, Nicola Molinaro, Mikel Lizarazu, Samu Taulu, Veikko Jousmäki, Marie Lallier, Manuel Carreiras, Xavier De Tiège, Neocortical activity tracks the hierarchical linguistic structures of self-produced speech during reading aloud, NeuroImage, Volume 216, 2020, 116788, ISSN 1053-8119, https://doi.org/10.1016/j.neuroimage.2020.116788
dc.identifier.issn  1053-8119
dc.identifier.uri  http://hdl.handle.net/10810/43352
dc.description  Available online 26 April 2020.
dc.description.abstract  How the human brain uses self-generated auditory information during speech production is rather unsettled. Current theories of language production consider a feedback monitoring system that monitors the auditory consequences of speech output and an internal monitoring system, which makes predictions about the auditory consequences of speech before its production. To gain novel insights into underlying neural processes, we investigated the coupling between neuromagnetic activity and the temporal envelope of the heard speech sounds (i.e., cortical tracking of speech) in a group of adults who 1) read a text aloud, 2) listened to a recording of their own speech (i.e., playback), and 3) listened to another speech recording. Reading aloud was here used as a particular form of speech production that shares various processes with natural speech. During reading aloud, the reader’s brain tracked the slow temporal fluctuations of the speech output. Specifically, auditory cortices tracked phrases (<1 Hz) but to a lesser extent than during the two speech listening conditions. Also, the tracking of words (2–4 Hz) and syllables (4–8 Hz) occurred at parietal opercula during reading aloud and at auditory cortices during listening. Directionality analyses were then used to get insights into the monitoring systems involved in the processing of self-generated auditory information. Analyses revealed that the cortical tracking of speech at <1 Hz, 2–4 Hz and 4–8 Hz is dominated by speech-to-brain directional coupling during both reading aloud and listening, i.e., the cortical tracking of speech during reading aloud mainly entails auditory feedback processing. Nevertheless, brain-to-speech directional coupling at 4–8 Hz was enhanced during reading aloud compared with listening, likely reflecting the establishment of predictions about the auditory consequences of speech before production. These data bring novel insights into how auditory verbal information is tracked by the human brain during perception and self-generation of connected speech.
dc.description.sponsorship  Mathieu Bourguignon has been supported by the program Attract of Innoviris (grant 2015-BB2B-10), by the Spanish Ministry of Economy and Competitiveness (grant PSI2016-77175-P), and by the Marie Skłodowska-Curie Action of the European Commission (grant 743562). Nicola Molinaro has been supported by the Spanish Ministry of Economy and Competitiveness (grant PSI2015-65694-P), the Agencia Estatal de Investigación (AEI), the Fondo Europeo de Desarrollo Regional (FEDER) and by the Basque government (grant PI_2016_1_0014). Mikel Lizarazu has been supported by the Agence Nationale de la Recherche (grants ANR-10-LABX-0087 IEC and ANR-10-IDEX-0001-02 PSL). Xavier De Tiège is Post-doctorate Clinical Master Specialist at the Fonds de la Recherche Scientifique (F.R.S.-FNRS, Brussels, Belgium). This research is supported by the Basque Government through the BERC 2018–2021 program and by the Spanish State Research Agency through BCBL Severo Ochoa excellence accreditation SEV-2015-0490. The MEG project at the CUB Hôpital Erasme is financially supported by the Fonds Erasme.
dc.language.iso  eng
dc.publisher  NeuroImage
dc.relation  info:eu-repo/grantAgreement/MINECO/PSI2016-77175-P
dc.relation  info:eu-repo/grantAgreement/EC/H2020/MC/743562
dc.relation  info:eu-repo/grantAgreement/MINECO/PSI2015-65694-P
dc.relation  info:eu-repo/grantAgreement/MINECO/SEV-2015-0490
dc.rights  info:eu-repo/semantics/openAccess
dc.subject  Reading
dc.subject  Speech perception
dc.subject  Speech production
dc.subject  Connected speech
dc.subject  Cortical tracking of speech
dc.subject  Magnetoencephalography
dc.title  Neocortical activity tracks the hierarchical linguistic structures of self-produced speech during reading aloud
dc.type  info:eu-repo/semantics/article
dc.rights.holder  © 2020 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license
dc.relation.publisherversion  www.elsevier.com/locate/neuroimage
dc.identifier.doi  10.1016/j.neuroimage.2020.116788
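
Note: the "cortical tracking of speech" measure described in the abstract quantifies the coupling between brain activity and the temporal envelope of heard speech within the phrase (<1 Hz), word (2–4 Hz) and syllable (4–8 Hz) bands. The sketch below is a minimal toy illustration of that idea and not the authors' MEG pipeline; the sampling rate, the simulated signals, and the use of SciPy coherence are assumptions made for illustration only.

    # Toy sketch (not the authors' pipeline): coherence between the temporal
    # envelope of speech and a brain signal, averaged in the linguistic bands
    # named in the abstract. All signals here are simulated placeholders.
    import numpy as np
    from scipy.signal import coherence, hilbert

    fs = 1000.0                        # sampling rate in Hz (assumed)
    t = np.arange(0, 300.0, 1.0 / fs)  # 5 minutes of toy data

    rng = np.random.default_rng(0)
    speech = rng.standard_normal(t.size)  # stand-in for the recorded audio
    # Stand-in for an MEG time series: delayed copy of the speech plus noise.
    brain = np.roll(speech, int(0.1 * fs)) + rng.standard_normal(t.size)

    # Temporal envelope of speech via the analytic signal (one common choice).
    envelope = np.abs(hilbert(speech))

    # Spectral coherence between the speech envelope and brain activity.
    f, coh = coherence(envelope, brain, fs=fs, nperseg=int(8 * fs))

    # Average coherence in the three bands from the abstract.
    bands = {"phrases (<1 Hz)": (0.1, 1.0),
             "words (2-4 Hz)": (2.0, 4.0),
             "syllables (4-8 Hz)": (4.0, 8.0)}
    for name, (lo, hi) in bands.items():
        mask = (f >= lo) & (f <= hi)
        print(f"{name}: mean coherence = {coh[mask].mean():.3f}")

The directionality question addressed in the paper (speech-to-brain versus brain-to-speech coupling) requires directed coupling measures, which this undirected coherence example does not attempt.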

