How are visual words represented? Insights from EEG-based visual word decoding, feature derivation and image reconstruction
Date
2019
Authors
Ling, Shouyu
Lee, Andy C. H.
Armstrong, Blair C.
Nestor, Adrian
Ling, S., Lee, A. C. H., Armstrong, B. C., Nestor, A. How are visual words represented? Insights from EEG-based visual word decoding, feature derivation and image reconstruction. Hum Brain Mapp. 2019; 40: 5056–5068. https://doi.org/10.1002/hbm.24757
Abstract
Investigations into the neural basis of reading have shed light on the cortical locus
and the functional role of visual-orthographic processing. Yet, the fine-grained structure
of neural representations subserving reading remains to be clarified. Here, we
capitalize on the spatiotemporal structure of electroencephalography (EEG) data to
examine whether and how EEG patterns can serve to decode and reconstruct the internal
representations of visually presented words in healthy adults. Our results show that
word classification and image reconstruction were accurate well above chance, and that
their temporal profiles exhibited an early onset, soon after 100 ms, peaking around
170 ms. Further, reconstruction results were well explained by a combination of
visual-orthographic word properties. Last, systematic individual differences were
detected in orthographic representations across participants. Collectively, our results
establish the feasibility of EEG-based word decoding and image reconstruction. More
generally, they help to elucidate the specific features, dynamics, and neurocomputational
principles underlying word recognition.
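The pairwise word decoding described in the abstract can be illustrated with a minimal sketch. This is not the authors' pipeline; it is a hypothetical example, assuming simulated EEG epochs (channels × time bins flattened into feature vectors) and a simple cross-validated nearest-centroid classifier as a stand-in for whatever pattern classifier the study actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 2 word classes, 40 trials each, 64 channels x 20 time bins.
# Real preprocessed EEG epochs would replace these simulated patterns.
n_trials, n_feat = 40, 64 * 20
mean_a = rng.normal(0, 1, n_feat)
mean_b = mean_a + rng.normal(0, 0.3, n_feat)  # class B differs by a small signal
X_a = mean_a + rng.normal(0, 1, (n_trials, n_feat))
X_b = mean_b + rng.normal(0, 1, (n_trials, n_feat))

def pairwise_decoding_accuracy(X_a, X_b, n_folds=5):
    """Cross-validated pairwise classification by correlation to class centroids."""
    X = np.vstack([X_a, X_b])
    y = np.array([0] * len(X_a) + [1] * len(X_b))
    idx = np.arange(len(X))
    correct = 0
    for fold in range(n_folds):
        test = idx % n_folds == fold
        train = ~test
        c0 = X[train & (y == 0)].mean(axis=0)  # class-A centroid from training trials
        c1 = X[train & (y == 1)].mean(axis=0)  # class-B centroid from training trials
        for x, label in zip(X[test], y[test]):
            # assign each held-out trial to the more correlated centroid
            pred = 0 if np.corrcoef(x, c0)[0, 1] > np.corrcoef(x, c1)[0, 1] else 1
            correct += pred == label
    return correct / len(X)

acc = pairwise_decoding_accuracy(X_a, X_b)
print(f"pairwise decoding accuracy: {acc:.2f}")
```

With accuracies computed for every word pair, the study's further steps (deriving visual-orthographic features and reconstructing word images) would operate on the resulting pairwise discriminability structure; those steps are beyond this sketch.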