February 2014: A Meta-Analysis of Summer Reading


Kim, J. S., & Quinn, D. M. (2013). The effects of summer reading on low-income children’s literacy achievement from kindergarten to grade 8: A meta-analysis of classroom and home interventions. Review of Educational Research, 83(3), 386–431.

Summary by Dr. Nancy Scammacca

Overview

The gap in reading achievement between low-income students and their peers from higher-income families has been well documented. The disparity appears in early elementary school and widens as students progress into middle school. Previous research has shown that low-income children are particularly at risk for a decline in reading skills during the summer months, whereas higher-income children maintain or improve their reading skills over the summer. The reasons cited for this pattern include a lack of books in the homes of low-income families and the ability of middle- and upper-income families to spend money on educational activities and resources that enrich their children’s reading skills during the summer months.

Classroom- and home-based summer reading programs have been created in an attempt to improve low-income students’ reading ability during summer vacation. Classroom interventions have been shown to be beneficial for students. They typically involve reading instruction provided by teachers and college and graduate students, which can be costly. Home-based programs are a potentially cost-saving alternative. However, they depend more on the willingness of children to initiate reading activities, as child-initiated book reading has been found to be the key mechanism driving reading gains from home-based programs. Specifically, children who read six or more books each summer improve their reading skills significantly more than children who do not.

In an effort to learn more about the effectiveness of summer reading programs, Kim and Quinn surveyed the existing research to determine what types of reading outcomes these programs affect; whether including research-based, best-practice reading instruction in a program makes a difference; and whether low-income children benefit from summer reading programs to a greater extent than their higher-income peers. The researchers expected that summer reading programs would demonstrate wide-ranging benefits and that these benefits would be larger for programs that included research-based instructional practices and for children from low-income families. The researchers did not have a hypothesis about the effectiveness of home-based programs compared to classroom-based ones.

Study Design

Kim and Quinn conducted a meta-analysis to investigate the effects of summer reading programs. A meta-analysis is a “study of studies.” It takes a comprehensive look at research published in a particular area and compares the effects found in each study by using a statistic known as an effect size. Based on the meta-analytic results, researchers can determine the overall effectiveness of programs and what types of programs are more effective than others and for whom. Because a meta-analysis synthesizes results across a large group of studies in a systematic way, it provides results that are better able to inform practice than a single study on the same topic.

An important part of understanding the results of a meta-analysis is understanding the types of studies the researchers included. Kim and Quinn included only studies published after 1998 that did the following:

  1. Researched the effectiveness of a home-based or classroom-based summer reading program with students in kindergarten to grade 8 in the United States or Canada
  2. Measured the effects of the program on one or more tests of reading skills
  3. Used an experimental or quasi-experimental design that compared students who participated in the program to students who did not

A total of 41 programs met all three criteria and were included in Kim and Quinn’s analysis. Most of the programs included students in kindergarten to grade 5. About two-thirds were classroom-based, and low-income students were the majority of the participants in 60% of the programs.

Kim and Quinn reported their results by using the Cohen’s d effect size statistic. Cohen’s d is calculated by subtracting the average score of the comparison group (those who did not participate in the program) from the average score of the treatment group. The difference is then divided by the pooled standard deviation of the scores for the two groups. The result tells us how much the treatment group mean has moved away from the comparison group mean at the end of each study included in the analysis, using a common metric that allows the effects of each study to be combined. The more separation, the greater the difference between the two groups on the reading outcome measure of interest and the bigger the effect size. The Cohen’s d effect sizes from each study can be averaged together in a way that gives more weight to studies that had larger groups of participants. These weighted average effect sizes are reported with a 95% confidence interval, which is a range of values within which the true effect most likely lies.
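The arithmetic behind Cohen’s d and the weighted averaging can be sketched in a few lines. This is a minimal illustration of the general method (inverse-variance weighting), not Kim and Quinn’s exact procedure, and the numbers below are made up for the example rather than taken from the meta-analysis:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference between a treatment and a comparison group."""
    # Pooled standard deviation combines the two groups' spreads,
    # weighting each by its degrees of freedom (n - 1).
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / sd_pooled

# Hypothetical study: treatment mean 52, comparison mean 50, both SDs 10,
# 60 students per group. Pooled SD is 10, so d = (52 - 50) / 10 = 0.2.
d = cohens_d(52, 50, 10, 10, 60, 60)

def weighted_average(effects, variances):
    """Inverse-variance weighted mean effect with a 95% confidence interval."""
    # Larger studies have smaller variances, so 1/variance gives them more weight.
    weights = [1 / v for v in variances]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))          # standard error of the weighted mean
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Three hypothetical studies' effect sizes and their variances.
mean_d, ci = weighted_average([0.10, 0.25, 0.05], [0.01, 0.04, 0.02])
```

If the resulting confidence interval excludes zero, the average effect is considered significantly different from zero, which is the criterion Kim and Quinn apply throughout their results.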

Key Findings

Across the 41 summer reading programs, Kim and Quinn found an average Cohen’s d of 0.10 (95% confidence interval = 0.04, 0.15), indicating an overall positive relationship between participating in a summer reading program and overall reading achievement that is significantly different from zero. Next, they looked at the effect of summer reading programs on specific reading skills. The effect was d = 0.23 for reading comprehension only and d = 0.24 for fluency and decoding combined. These effects were all significantly different from zero. The results also indicated the following:

  • Effects from home-based and classroom-based programs were similar in size on each of the reading skills examined in the analysis.
  • Within the classroom-based programs, those that implemented research-based instructional practices had effects ranging from d = 0.25 to d = 0.63, and programs that did not implement these practices had effects ranging from d = 0.05 to d = 0.18.
  • There were no differences in the effectiveness of classroom-based programs based on the number of hours of instruction per day, the total number of hours of services provided, the number of students per class, whether instructors were provided with training prior to implementing the program, or whether instructors were certified teachers or college or graduate students. Only 12 studies reported all of these pieces of information. Of these 12 studies, the 5 programs that had fewer than 13 students per class, provided 4 to 8 hours of instruction per day, and provided a total of 70 or more hours of instruction had a larger average effect (d = 0.25) than the 7 programs that did not meet all of these criteria (d = 0.03).
  • For the studies where the majority of students in the program were from low-income families, the effects of participation ranged from d = 0.10 for overall reading achievement to d = 0.33 for reading comprehension. All effects were significantly different from zero. For studies where students were from a mix of different income levels, the only effect that was significantly different from zero was decoding and fluency (d = 0.27). The average effect size for reading comprehension was significantly greater for the studies with mostly low-income students than for the studies with students from a mix of income levels.

To gain further insight into the reasons why the summer reading programs were more effective for low-income students, Kim and Quinn looked at studies where researchers had calculated results separately for low-income students and students with a mix of income levels. In the seven studies that provided this information, low-income students benefited significantly more from summer reading programs than other students.

Kim and Quinn also looked at the change in reading achievement from spring of one school year to fall of the following school year for low-income and mixed-income students in the studies’ control groups, who therefore did not receive any summer reading intervention. The results indicated that without summer reading intervention, low-income students’ reading achievement showed no change (d = 0.05), while the reading ability of students from mixed-income backgrounds grew during the summer (d = 0.26). This finding matches those of other studies showing that in the absence of an intervention, the gap in reading ability between low-income and higher-income students widens during the summer months.

Recommendations

Recommendation 1: Summer reading programs should be encouraged, as they are a critical means of closing the achievement gap between low-income and higher-income students.

This meta-analysis provides further evidence of the importance of providing educational resources to low-income children during the summer months. When begun early in a child’s education, such programs could conceivably prevent an achievement gap from forming, thus eliminating the need for more costly interventions to remediate reading difficulties later. Summer reading programs are especially important because they demonstrate a meaningful impact on reading comprehension in addition to decoding and fluency. Improving reading comprehension is the ultimate goal of reading interventions, and remediating comprehension deficits is especially important for students as they move from the early-elementary to the upper-elementary and middle school grades.

Recommendation 2: Home-based summer reading programs should be explored, as they can be as effective as classroom-based programs while costing substantially less to implement.

Fewer home-based than classroom-based summer reading programs met Kim and Quinn’s criteria for inclusion in this meta-analysis. The ones that were included produced effects similar to those of classroom-based programs, with students making measurable gains on measures of reading comprehension and of decoding and fluency. Kim and Quinn point out that home-based summer programs could be enhanced by coupling them with a parent training session held at the child’s school prior to the summer break. This session could strengthen the connection between the child’s home and school environments and provide an opportunity for parents to learn research-based instructional practices, such as a read-aloud routine, to implement when reading with their children during the summer. Adding such a session to a home-based program would cost substantially less than hiring instructors to implement a classroom-based program during the summer. Further research is needed to determine how to ensure that home-based programs achieve the same levels of fidelity of implementation and dosage of the intervention as classroom-based programs, where fidelity and dosage are easier to monitor.

Recommendation 3: All summer reading programs should be based on instructional practices that research has shown are highly effective.

Among the classroom-based summer reading programs included in this meta-analysis, the ones with the strongest effects, and with particularly large effects on reading comprehension, were those that implemented instructional practices based on robust research findings. Resources such as the What Works Clearinghouse and the National Reading Panel report Teaching Children to Read (2000) highlight practices that have been demonstrated to be effective across multiple high-quality reading research studies. By accessing these resources, designers of summer reading programs can easily learn about and implement the instructional practices that will make the biggest difference for students. Kim and Quinn also found that the most effective classroom programs were the most resource-intensive (defined as those with small class sizes, 4–8 hours of daily instruction, and 70 or more total hours of instruction). Further research is needed to determine whether summer reading programs with this level of intensity provide sufficient gains in reading ability to justify their high costs. This meta-analysis did not provide enough detail on the home-based programs to address similar questions about dosage or about ways to ensure or increase adherence in these programs.

Reference

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of scientific research literature on reading and its implications for reading instruction. Washington, DC: National Institute of Child Health and Human Development.