Oct 15, 2010 by Andrew Ostarello

In 2007, Scientific Learning introduced Reading Progress Indicator, or RPI for short.  RPI is an individually administered, computer-based assessment of reading and language skills.  We will review the key features of RPI, demonstrate its close relationship to a wide array of high-stakes reading tests, and show how it can be used to forecast future district reading success.

When we were building RPI, we wanted an assessment that would achieve the following four goals:

  1. Be an individually administered, computerized assessment.
  2. Be short and easy to administer.  We wanted a test that took between 30 and 40 minutes to complete.
  3. Cover key reading and language skills: phonological awareness, decoding, vocabulary, and comprehension.
  4. Quickly and reliably detect improvements after using Fast ForWord products.

RPI achieves all four of these goals.

If we look at the academic calendar, we can see that most state reading assessments happen only once a year, in the spring.  Though they are important for measuring student reading growth, they are infrequent.

RPI is a good supplement to that once-a-year picture of student reading growth.  With a pre-test in the fall and subsequent tests after completing each product, teachers can get more information to answer critical instructional questions:

Who’s currently succeeding? Who’s on track with their reading growth? And finally, who’s likely to do well on the state reading assessment? Now, that third question can only be answered if RPI measures reading ability in a similar way to those state reading assessments.  Does it?

It turns out it does align well with state reading assessments.  Here’s an example from Florida.  The Florida Comprehensive Assessment Test, or FCAT, has a developmental scale score which spans all grade levels.  RPI correlates positively with this FCAT score.  The data show a correlation of 0.51.  Of course, it’s not perfect, but 0.51 is a pretty strong correlation, and it suggests that RPI measures the same kinds of reading skills that the FCAT measures.
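To make that statistic concrete, here is a minimal Python sketch of how a Pearson correlation between matched RPI and FCAT scores could be computed.  The score values below are made up purely for illustration and are not actual student data; the sketch also assumes SciPy is available.

    # Minimal sketch: Pearson correlation between matched RPI and FCAT scores.
    # All score values are hypothetical, not real student data.
    from scipy.stats import pearsonr

    # Each position holds one student's RPI scale score and that same
    # student's FCAT developmental scale score.
    rpi_scores  = [612, 645, 580, 701, 660, 598, 720, 655]
    fcat_scores = [1450, 1520, 1380, 1610, 1540, 1410, 1655, 1500]

    r, p_value = pearsonr(rpi_scores, fcat_scores)
    print(f"correlation r = {r:.2f}, p = {p_value:.3f}")

A correlation computed this way on matched student records is what underlies the 0.51 figure reported above.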

These results are not limited to Florida.  Here are four more tests that have a strong positive correlation with RPI.  The ITBS/ITED tests from Iowa and the ISTEP from Indiana – two more state reading assessments.  The Gates-MacGinitie Reading Test and the Woodcock-Johnson – both widely used supplemental reading assessments.  All of these correlations are well over 0.5, and all are statistically significant.

So what can be done with these kinds of correlation data? Well, it’s important to realize just how rich this dataset is. We have matched data from over 25,000 RPI users and data from over 12,000 students who took state assessments and used Fast ForWord products.

With strong correlations between the two kinds of tests, we can begin to predict student performance on state assessments by looking at the trend in a student’s RPI scores.  Not perfect predictions, of course, but we can build reasonably accurate mathematical models of student growth for a variety of states.
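As a rough illustration of what such a model could look like, here is a simple least-squares sketch in Python that regresses a later state-test score on a student’s RPI score.  The numbers and the single-predictor form are assumptions made for the example; this post does not describe the actual models built from the matched dataset.

    # Illustrative sketch only: ordinary least-squares fit of state-test
    # scores on RPI scores.  All values are hypothetical.
    import numpy as np

    rpi   = np.array([612, 645, 580, 701, 660, 598, 720, 655], dtype=float)
    state = np.array([1450, 1520, 1380, 1610, 1540, 1410, 1655, 1500], dtype=float)

    # Fit state ≈ slope * rpi + intercept.
    slope, intercept = np.polyfit(rpi, state, deg=1)

    # Forecast a state score for a student whose latest RPI score is 630.
    print(f"predicted state score: {slope * 630 + intercept:.0f}")

In practice, a growth model would draw on the trend across repeated RPI scores rather than a single score, but the basic idea of fitting a predictive relationship between the two tests is the same.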