Rapid online assessment of reading ability
Date
2021
Author
Yeatman, Jason D.
Tang, Kenny An
Donnelly, Patrick M.
Yablonski, Maya
Ramamurthy, Mahalakshmi
Karipidis, Iliana I.
Caffarra, Sendy
Takada, Megumi E.
Kanopka, Klint
Ben‑Shachar, Michal
Domingue, Benjamin W.
Yeatman, J.D., Tang, K.A., Donnelly, P.M. et al. Rapid online assessment of reading ability. Sci Rep 11, 6396 (2021). https://doi.org/10.1038/s41598-021-85907-x
Abstract
An accurate model of the factors that contribute to individual differences in reading ability depends on data collection in large, diverse, and representative samples of research participants. However, that is rarely feasible due to the constraints imposed by standardized measures of reading ability, which require test administration by trained clinicians or researchers. Here we explore whether a simple, two-alternative forced-choice, time-limited lexical decision task (LDT), self-delivered through the web browser, can serve as an accurate and reliable measure of reading ability. We found that performance on the LDT is highly correlated with scores on standardized measures of reading ability such as the Woodcock-Johnson Letter Word Identification test (r = 0.91, disattenuated r = 0.94). Importantly, the LDT reading ability measure is highly reliable (r = 0.97). After optimizing the list of words and pseudowords based on item response theory, we found that a short experiment with 76 trials (2–3 min) provides a reliable (r = 0.95) measure of reading ability. Thus, the self-administered Rapid Online Assessment of Reading ability (ROAR) developed here overcomes the constraints of resource-intensive, in-person reading assessment, and provides an efficient and automated tool for effective online research into the mechanisms of reading (dis)ability.
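The disattenuated correlation reported in the abstract corrects the observed correlation for measurement error in the two instruments, using the classical Spearman correction: the observed correlation divided by the square root of the product of the two measures' reliabilities. The sketch below illustrates that arithmetic; the observed correlation (0.91) and the LDT reliability (0.97) are taken from the abstract, while the reliability assumed for the Woodcock-Johnson measure is a placeholder chosen for illustration, not a value reported here.

```python
from math import sqrt

def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Classical Spearman correction for attenuation: the observed correlation
    divided by the square root of the product of the two reliabilities."""
    return r_observed / sqrt(rel_x * rel_y)

# r_observed and rel_x (LDT reliability) come from the abstract; rel_y is an
# assumed placeholder for the Woodcock-Johnson reliability, for illustration only.
print(round(disattenuate(r_observed=0.91, rel_x=0.97, rel_y=0.97), 2))  # -> 0.94
```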