May 2016: Cost-Effectiveness Analysis of Early Reading Programs


Hollands, F. M., Kieffer, M. J., Shand, R., Pan, Y., Cheng, H., & Levin, H. (2016). Cost-effectiveness analysis of early reading programs: A demonstration with recommendations for future research. Journal of Research on Educational Effectiveness, 9, 30–53.

Summary by Dr. Nancy Scammacca

Overview

The cost-effectiveness of educational programs is a timely issue, as many school district budgets increasingly require careful investment of resources to maximize student achievement. Although methods for conducting a cost-effectiveness analysis have been available for decades, they are not often implemented in evaluations of education programs. Instead, research has tended to focus exclusively on effectiveness without giving much attention to the associated costs. In particular, a great deal of rigorous research has been conducted that has identified effective early literacy programs and interventions for struggling readers in kindergarten to grade 3, but researchers have neglected to make clear what the cost of implementation would be for schools hoping to achieve similar results. As a result, most of these programs have not been implemented successfully at scale.

Research that includes information on program costs would clarify the requirements for successful implementation, helping school districts to understand what resources are required and to choose a program that they can implement with fidelity. Adding cost-effectiveness information, which provides the per-student cost for a given unit of improvement on a desired outcome, can help school administrations match their available resources to the most effective program for their students’ needs.

Challenges to Cost-Effectiveness Analysis of Reading Programs

Hollands et al. (2016) sought to provide cost-effectiveness information for early reading programs that were judged to be effective by the What Works Clearinghouse, a review board established by the U.S. Department of Education to screen studies and determine which programs meet requirements for effectiveness based on the validity of the research conducted and the magnitude of the results obtained. Of the 32 early reading programs the What Works Clearinghouse rated as effective, just two could be compared in a cost-effectiveness analysis. Many studies of other programs did not make direct comparisons between two or more programs; instead, they compared one program with existing school instructional practice, also known as “business as usual.” Computing the cost-effectiveness of business-as-usual instruction is difficult, in part because studies typically involve multiple schools with different existing instructional programs, and because researchers’ descriptions of typical instruction generally lack sufficient detail to allow costs to be computed.

Another challenge to determining the cost-effectiveness of early reading programs resulted from the diversity of outcomes researchers assessed in their studies. Cost-effectiveness analysis requires that the same outcome be used to compute the effect of each program. Unfortunately, many different outcome measures are typically used when assessing the effectiveness of a reading program. Some of these measures are designed by researchers and closely resemble the materials used in the program being studied; others are standardized measures that assess students in comparison to a normative group of the same age or grade. Standardized measures tend to be less similar to program materials than researcher-developed measures, requiring students to generalize what they have learned in the program. In addition, studies often measure outcomes in different domains. For example, one study might focus on fluency outcomes while another focuses on comprehension outcomes. Comparing the cost-effectiveness of these programs would not be possible because their effectiveness is not measured within the same domain.

Finally, efforts to conduct cost-effectiveness analysis of early reading programs also were challenged by factors related to the population of students involved in the programs. Students differed between studies in grade, age, and ability level. Some studies provided instruction to groups of students in different grades or combined data from students in different grades when reporting results. Accurate cost-effectiveness analysis cannot be done when comparing studies of different programs that also differ in the population of students treated or the way student data are combined when results are reported.

Cost-Effectiveness Methodology

In conducting their cost-effectiveness analysis, Hollands et al. researched the costs of one implementation of two early reading programs. Given that previous research determined that costs can differ across implementations of the same program, focusing on a specific implementation of two programs was the best way to match a particular effect to the costs associated with obtaining that effect. The two programs they analyzed were the Wilson Reading System and Corrective Reading. Both were included in Torgesen et al.’s (2006) randomized experiment and results for both were reported for third-grade struggling readers who were selected to participate based on the same criteria.

Hollands et al. used the “ingredients method” to determine the costs of implementing each program. This methodology is commonly used in cost-effectiveness studies in education. It involves listing each resource used to provide the program and then calculating the costs of these resources. This approach is more reliable than using a program’s budget as a cost estimate because budgeted expenses often differ from actual expenses. Budgeted expenses typically do not include capital costs (such as the school building) that are used over many years, resources purchased previously that were used in the program, or resources contributed by outside groups. By including these costs, the ingredients approach better captures the true total cost of implementing the program relative to the effect achieved.
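At its core, the ingredients method is a straightforward computation: enumerate every resource the program consumes, price each one, and sum. The sketch below illustrates the mechanics only; the ingredient names, quantities, and prices are hypothetical, not figures from Hollands et al.

```python
# A minimal sketch of the "ingredients method": list every resource used to
# deliver a program, price each one, and sum to get total and per-student
# cost. All ingredients and prices below are illustrative assumptions, not
# data from Hollands et al. (2016).
ingredients = [
    # (description, quantity, unit cost in dollars)
    ("teacher time (hours)",          140, 45.00),  # salary + benefits per hour
    ("training sessions",               2, 500.00),
    ("student materials (sets)",       20, 85.00),
    ("classroom space (sq-ft-years)", 400, 1.50),   # annualized facility cost
]

total_cost = sum(qty * unit_cost for _, qty, unit_cost in ingredients)
students_served = 20
cost_per_student = total_cost / students_served

print(f"Total cost: ${total_cost:,.2f}")        # $9,600.00
print(f"Cost per student: ${cost_per_student:,.2f}")  # $480.00
```

Note that the facility line item is what distinguishes this approach from reading costs off a budget: capital resources, previously purchased materials, and donated time are priced even though no money changes hands during the implementation year.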

To determine the resources used to implement the programs, Hollands et al. drew first on the details recorded by Torgesen et al. (2006), on other studies of the two programs, and on websites and other publications that provided information on the programs. Using this information, Hollands et al. compiled a structured interview form to document additional details about the ingredients used to implement the programs. The interviews sought to gather more information about the qualifications and experience of teachers and other staff members who implemented the programs, how much time each person involved spent on the program, and costs for items such as materials. Costs associated with Torgesen et al.’s research were not included because they would not be incurred by schools implementing the programs in the future.

Hollands et al. determined costs for each ingredient identified for each program by using a Center for Benefit-Cost Studies of Education toolkit, which is available online for use without cost. The toolkit facilitates cost-effectiveness analysis and contains average nationwide costs for resources commonly used to implement education programs, including salaries for teachers and other education professionals. Hollands et al. calculated costs for program materials and other supplies by obtaining the information from program vendors or from retail stores. They calculated costs for classrooms on a price-per-square-foot basis for the amount of space used over the period of time needed to implement the program. Though Hollands et al. recommend calculating costs and effects at the site level, they were unable to do so for the two programs in their analysis because only a small number of students were involved at each site and Torgesen et al. calculated effects across all sites combined.

Key Findings

  • The cost per student was estimated at $6,696 for the Wilson Reading System and $10,108 for Corrective Reading.
  • Personnel costs represented more than 90% of the cost of each program. Costs for Corrective Reading were higher in large part because the teachers who implemented the program were more experienced and taught fewer students than those who implemented the Wilson Reading System.
  • Student improvement in terms of gains in alphabetic skills was similar for the two programs, with an effect size of 0.22 for Corrective Reading and 0.33 for the Wilson Reading System in Torgesen et al. (2006). However, given the difference in cost, the Wilson Reading System was more than twice as cost-effective. The cost per student for a one standard deviation improvement in alphabetics was $45,945 for Corrective Reading and $20,291 for the Wilson Reading System.
  • In a “what-if” scenario, costs for both programs were compared to determine how the cost-effectiveness results would differ if both were implemented by teachers with 5 years of experience who each taught four groups of five students. In this scenario, the two programs were more comparable in terms of cost-effectiveness for alphabetics.
  • Student gains in fluency were larger for Corrective Reading, with an effect size of 0.27. The effect size for the Wilson Reading System was 0.15, an effect that did not differ significantly from zero. The cost to achieve a one standard deviation improvement in fluency for Corrective Reading was $37,437. Given that the Wilson Reading System did not demonstrate effectiveness in fluency, no cost-effectiveness ratio can be calculated.
  • Costs to implement the two programs also were estimated for implementation as designed by the program developers, which involved a larger dosage of instruction than was done in the 1-year implementation by Torgesen et al. (2006). Total recommended instructional time over 2 to 3 years for the Wilson Reading System is 450 hours, and Corrective Reading recommends 240 hours. At this dosage and the recommended instructional group size, Corrective Reading was nearly $2,000 less expensive per student than the Wilson Reading System. This difference was largely due to increasing the number of students per group in Corrective Reading and more intensive ongoing training requirements for the Wilson Reading System. However, given the larger effect size for alphabetics obtained for the Wilson Reading System, it remained more cost-effective than Corrective Reading for improving alphabetic skills.
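The cost-effectiveness ratios in the findings above are simply the per-student cost divided by the effect size, giving the estimated cost to move a student one standard deviation on the outcome measure. A minimal sketch reproducing those figures:

```python
# Cost-effectiveness ratio = cost per student / effect size, i.e., the
# dollars required for a one-standard-deviation gain on the outcome.
# Per-student costs and effect sizes are those reported in the summary.
def ce_ratio(cost_per_student, effect_size):
    """Dollars per one-standard-deviation improvement on the outcome."""
    return cost_per_student / effect_size

wilson_alphabetics = ce_ratio(6696, 0.33)       # ≈ $20,291
corrective_alphabetics = ce_ratio(10108, 0.22)  # ≈ $45,945
corrective_fluency = ce_ratio(10108, 0.27)      # ≈ $37,437
# No fluency ratio is computed for the Wilson Reading System: its fluency
# effect (0.15) did not differ significantly from zero, so a ratio would
# not be meaningful.

print(f"Wilson, alphabetics:     ${wilson_alphabetics:,.0f}")
print(f"Corrective, alphabetics: ${corrective_alphabetics:,.0f}")
print(f"Corrective, fluency:     ${corrective_fluency:,.0f}")
```

The ratio makes the comparison concrete: although Corrective Reading's per-student cost is only about 1.5 times Wilson's, dividing by its smaller alphabetics effect size more than doubles its cost per unit of improvement.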

Implications and Recommendations for Practice

  • In choosing between the Wilson Reading System and Corrective Reading, one important consideration is the need for student improvement in alphabetics versus fluency. If alphabetics is the primary need, the Wilson Reading System is more cost-effective. If gains in fluency are the focus, Corrective Reading is more cost-effective.
  • Variables such as teacher qualifications, required training, and instructional group size have a substantial effect on the cost-effectiveness of programs and should be considered in determining which program to implement and how to implement it. Studies of programs that vary these aspects and determine the resulting differences in effects are needed to help school administrators make optimal decisions about how to implement a reading program.
  • More studies are needed that make direct comparisons between available educational programs and document the associated costs of each implementation. These studies are valuable to schools seeking to choose the best program for their students given the available resources.