Generalization From Newly Learned Words Reveals Structural Properties of the Human Reading System
Date: 2017
Authors: Armstrong, Blair C.; Dumay, Nicolas; Kim, Woojae; Pitt, Mark A.
Armstrong, B. C., Dumay, N., Kim, W., & Pitt, M. A. (2017). Generalization from newly learned words reveals structural properties of the human reading system. Journal of Experimental Psychology: General, 146(2), 227-249. http://dx.doi.org/10.1037/xge0000257
Abstract
Connectionist accounts of quasiregular domains, such as spelling–sound correspondences in English,
represent exception words (e.g., pint) amid regular words (e.g., mint) via a graded “warping” mechanism.
Warping allows the model to extend the dominant pronunciation to nonwords (regularization) with
minimal interference (spillover) from the exceptions. We tested for a behavioral marker of warping by
investigating the degree to which participants generalized from newly learned made-up words, which
ranged from sharing the dominant pronunciation (regular), to a subordinate pronunciation (ambiguous), to
a previously nonexistent (exception) pronunciation. The new words were learned over 2 days, and
generalization was assessed 48 hr later using nonword neighbors of the new words in a tempo naming
task. The frequency of regularization (a measure of generalization) was directly related to the degree of
warping required to learn the pronunciation of the new word. Simulations using the Plaut, McClelland,
Seidenberg, and Patterson (1996) model further support a warping interpretation. These findings
highlight the need to develop theories of representation that are integrally tied to how those representations are learned and generalized.
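The core idea of a quasiregular mapping — a dominant rule, a local exception that "warps" the learned mapping around itself, and regularization of novel neighbors — can be illustrated with a toy simulation. The sketch below is an assumption-laden stand-in, not the Plaut, McClelland, Seidenberg, and Patterson (1996) model: a single logistic unit trained on made-up four-letter "-int" words, where the vocabulary, encoding, and labels are all invented for illustration.

```python
# Toy sketch of a quasiregular mapping (NOT the Plaut et al., 1996, model):
# most training words share the dominant "regular" pronunciation (label 0);
# one exception, "pint", takes a different pronunciation (label 1).
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def encode(word):
    """One-hot encode each of the 4 letter positions, concatenated."""
    v = np.zeros(4 * len(ALPHABET))
    for i, ch in enumerate(word):
        v[i * len(ALPHABET) + ALPHABET.index(ch)] = 1.0
    return v

words = ["mint", "hint", "lint", "tint", "pint"]
labels = np.array([0.0, 0.0, 0.0, 0.0, 1.0])  # 1 = exception pronunciation
X = np.array([encode(w) for w in words])

# Gradient-descent training: the exception "warps" the learned weights
# mainly around its own distinctive feature (the initial "p"), leaving
# the shared "-int" features aligned with the dominant pronunciation.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(exception)
    grad = p - labels                        # logistic-loss gradient
    w -= 0.5 * X.T @ grad / len(labels)
    b -= 0.5 * grad.mean()

def pronounce(word):
    p = 1.0 / (1.0 + np.exp(-(encode(word) @ w + b)))
    return "exception" if p > 0.5 else "regular"

# A novel nonword neighbor such as "dint" inherits the dominant
# pronunciation -- the regularization behavior the tempo naming task
# measures -- while "pint" itself is still produced as an exception.
print(pronounce("pint"), pronounce("dint"))
```

In this caricature, generalization to "dint" falls out of the shared "-int" features carrying the dominant pronunciation; the exception's influence is confined to the features that distinguish it, which is the spillover-limiting role the abstract attributes to graded warping in the full distributed model.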