
dc.contributor.author: Yeatman, Jason D.
dc.contributor.author: Tang, Kenny An
dc.contributor.author: Donnelly, Patrick M.
dc.contributor.author: Yablonski, Maya
dc.contributor.author: Ramamurthy, Mahalakshmi
dc.contributor.author: Karipidis, Iliana I.
dc.contributor.author: Caffarra, Sendy
dc.contributor.author: Takada, Megumi E.
dc.contributor.author: Kanopka, Klint
dc.contributor.author: Ben-Shachar, Michal
dc.contributor.author: Domingue, Benjamin W.
dc.date.accessioned: 2021-03-29T13:45:28Z
dc.date.available: 2021-03-29T13:45:28Z
dc.date.issued: 2021
dc.identifier.citation: Yeatman, J.D., Tang, K.A., Donnelly, P.M. et al. Rapid online assessment of reading ability. Sci Rep 11, 6396 (2021). https://doi.org/10.1038/s41598-021-85907-x
dc.identifier.issn: 2045-2322
dc.identifier.uri: http://hdl.handle.net/10810/50812
dc.description: Published 18 March 2021
dc.description.abstract: An accurate model of the factors that contribute to individual differences in reading ability depends on data collection in large, diverse and representative samples of research participants. However, that is rarely feasible due to the constraints imposed by standardized measures of reading ability which require test administration by trained clinicians or researchers. Here we explore whether a simple, two-alternative forced choice, time limited lexical decision task (LDT), self-delivered through the web browser, can serve as an accurate and reliable measure of reading ability. We found that performance on the LDT is highly correlated with scores on standardized measures of reading ability such as the Woodcock-Johnson Letter Word Identification test (r = 0.91, disattenuated r = 0.94). Importantly, the LDT reading ability measure is highly reliable (r = 0.97). After optimizing the list of words and pseudowords based on item response theory, we found that a short experiment with 76 trials (2–3 min) provides a reliable (r = 0.95) measure of reading ability. Thus, the self-administered, Rapid Online Assessment of Reading ability (ROAR) developed here overcomes the constraints of resource-intensive, in-person reading assessment, and provides an efficient and automated tool for effective online research into the mechanisms of reading (dis)ability.
dc.description.sponsorship: We would like to thank the Pavlovia and PsychoPy team for their support on the browser-based experiments. This work was funded by NIH NICHD R01HD09586101, research grants from Microsoft and Jacobs Foundation Research Fellowship to J.D.Y.
dc.language.iso: eng
dc.publisher: Scientific Reports
dc.rights: info:eu-repo/semantics/openAccess
dc.title: Rapid online assessment of reading ability
dc.type: info:eu-repo/semantics/article
dc.rights.holder: Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. © The Author(s) 2021
dc.relation.publisherversion: https://www.nature.com/srep/
dc.identifier.doi: 10.1038/s41598-021-85907-x
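
Note on the disattenuated correlation reported in the abstract: a disattenuated correlation is typically obtained with Spearman's correction for attenuation, which divides the observed correlation by the square root of the product of the two measures' reliabilities. The sketch below illustrates that calculation only; it is not the authors' code, and the Woodcock-Johnson reliability used here is a placeholder value (the abstract reports only the LDT reliability of r = 0.97).

# Minimal sketch of Spearman's correction for attenuation, assuming a
# placeholder reliability for the Woodcock-Johnson test. Not the authors' code.
import math

def disattenuated_correlation(r_observed: float, reliability_x: float, reliability_y: float) -> float:
    """Divide the observed correlation by the geometric mean of the two reliabilities."""
    return r_observed / math.sqrt(reliability_x * reliability_y)

# Observed LDT vs. Woodcock-Johnson correlation from the abstract (r = 0.91),
# LDT reliability from the abstract (r = 0.97), and an assumed test reliability of 0.92.
print(round(disattenuated_correlation(0.91, 0.97, 0.92), 2))  # ~0.96 with these placeholder inputs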

