by Jerry Treadway, Professor of Education
San Diego State University
Summary
In a recent paper that has been broadly disseminated via the Internet,
Moustafa and Land have reached erroneous conclusions about Open Court
Publishing's reading series, Collections for Young Scholars. Chief among
them are: (1) that Open Court's reading program, Collections for Young
Scholars, was adopted by the California State School Board despite
recommendations to the contrary by the state's textbook adoption committee; (2)
that, more generally, California's recent policy and legislation on reading
instruction were unduly influenced to favor the Open Court curriculum by the
results of a single study conducted in Houston and sponsored by NICHD (Foorman
et al. 1998); and (3) that the Open Court curriculum produces inferior reading
outcomes as compared with a variety of other commercial programs.
In this paper, all three allegations are refuted. Contrary to Moustafa and
Land's claims: (1) Open Court was approved at every stage in California's 1996
textbook adoption process; (2) the NICHD Houston study post-dated, and therefore
had no causal effect on, California's reading initiative; and (3) when several
serious flaws in Moustafa and Land's analysis are corrected, the Open Court
schools are shown to significantly outperform otherwise comparable non-Open
Court schools.
Introduction
Moustafa and Land's paper,
"The Research Base of Open Court and its Translation into Instructional Policy,"
recently disseminated over the Internet, has received widespread but undeserved
attention. In their article, Moustafa and Land have made serious allegations
about the legitimacy and ethics of the process through which Open Court's
reading program, Collections for Young Scholars was adopted in California
and about the program's instructional adequacy.
As background, during the 1990s, the public was repeatedly confronted with
hard data from the National Assessment of Educational Progress (NAEP 1994, 1995)
indicating that many children in U.S. schools could not read well enough to
comprehend or learn from grade-level materials. Over the same period of time,
however, reading researchers formally published an array of scientifically
validated evidence that, given proper instruction, virtually every healthy child
can learn to read. It is within this context that the Open Court program became
topical. Through both rigorous scientific studies (e.g., Foorman, et al. 1998)
and the separate experiments of a number of school districts around the country
(e.g., Sacramento, CA; Baltimore, MD; Ft. Worth, TX), implementation of the Open
Court program has been shown to produce marked improvement in students' reading
achievement. Moreover, in both the prestigious and carefully vetted reports of
the National Research Council (1998) and the National Reading Panel (2000) and
in curriculum evaluations of a number of other authors and organizations (e.g.,
AFT), the Open Court curriculum has gained special recognition for its coverage
of those instructional components that research has found most critical to
children's reading success. Arguably as a result of such reports, the Open Court
program is increasingly recommended to schools seeking to improve their reading
outcomes.
In short, if Moustafa and Land were correct in their allegations, their paper
would be well warranted. Yet, the evidence shows that Moustafa and Land are
wrong in their message and allegations, and that their electronic broadcast of
this paper amounts to a grievous disservice to all who are earnestly concerned
with how best to help children learn to read. In the present paper it will be
shown that Moustafa and Land's thesis is seriously flawed in both information
and argument.
The remainder of this paper will present excerpts from Moustafa and Land's
paper (in bold, with page numbers corresponding to those in the pdf file
on Moustafa's website), followed by discussion of the errors embodied in each.
Because Moustafa and Land's paper is so thoroughly riddled with misinformation,
it was possible to respond only to selected points. The reader is therefore
warned not to assume the correctness of any portion of Moustafa and Land's text
that is not excerpted or addressed herein.
(1) Moustafa & Land, p. 1:
In 1996, California's Reading/Language Arts Textbook Adoption Committee,
composed of qualified reading/language arts teacher specialists, recommended
that Open Court's Collections for Young Scholars not [emphasis
theirs] be placed on the state's textbook adoption list.
The process through which publisher materials were reviewed for California's
1996 Reading/Language Arts adoption included two evaluation committees and one
adoption board, namely, the State School Board. The first evaluation committee,
the Instructional Materials Evaluation Panel (IMEP), was made up of the
"qualified reading/language arts teacher specialists" to whom Moustafa and Land
allude. On the basis of their evaluation, the IMEP recommended, in the
affirmative, that Open Court be adopted. The second committee to
evaluate the materials was the Curriculum Development and Supplemental Materials
Commission (Curriculum Commission). The Curriculum Commission also recommended,
in the affirmative, that Open Court be adopted. Following is the
summary evaluation of Open Court that the IMEP and the Curriculum Commission
presented to the State School Board, as documented by the California Department
of Education:
Collections for Young Scholars develops independent and
self-motivated critical thinkers who reflect on, discuss, and write about
their reactions to rich literature. The program is integrated, sequentially
developed, and increasingly complex and demanding. Students regularly are
asked to confer with peers, develop questions, respond in a variety of ways,
and make connections across selections and other curricular areas. Support is
provided across all grade levels for skill development, including development
of phonemic awareness, and phonetic skills at the early grades. Little
evidence was noted of the availability of literature in the five most common
languages in California other than English. A strength of this program lies in
the activities, materials, and strategies that treat students as capable
learners and in those integrated assessment activities that are closely
related to classroom instruction. (Resource Evaluation: Reading/Language Arts
and English as a Second Language Adoption Report: Review of Instructional
Resources: Kindergarten Through Grade Eight, 1997, p. 39). (Note that, though
not published for wide distribution by the state printing house until 1997,
this report was submitted to the Board of Education in July 1996.)
In other words, and contrary to Moustafa and Land's claim, in voting to
include Open Court on California's list of adopted programs, the State School
Board was accepting the recommendation of both of the committees, the
IMEP and the Curriculum Commission, that were responsible for materials
evaluation for the State of California. The foregoing shows that Moustafa and
Land's first major conclusion, that Open Court was not recommended by the IMEP,
is false.
(2) Moustafa & Land, p. 1:
This study...compares the SAT9 reading scores of schools using Open Court
against comparable schools using non-scripted programs.
Moustafa and Land's description of their comparison schools as "non-scripted"
is misleading in that it invites readers to think "scripted" each time they
encounter "Open Court." Open Court is not a "scripted" program. Scripted
materials, by definition, are those that provide teachers with the exact words
through which to explain concepts and conduct activities. For example,
directions to be given to students in the administration of standardized tests
are "scripted." In fact, there are a few reading programs on the market that are
scripted. However, Open Court is not one of them.
Although the activities and materials provided in the Open Court manual are
carefully described and sequenced, the program makes no attempt to limit
communication between teachers and students. To the contrary, much of the
program is built on the premise that open inquiry and reflection by students are
critical both to learning and, as they allow teachers to determine how best to
clarify and enrich concepts, to instruction.
In view of this, more appropriate labels for the schools contrasted in
Moustafa and Land's study would be "Open Court" versus "non-Open Court" schools
or, more specifically, "Open Court" schools versus schools using "other
comprehensive commercial reading programs."
(3) Moustafa & Land, p. 1:
Open Court's Collection for Young Scholars, a reading instruction program
written by an electrical engineer in the 1960s...
The "electrical engineer" referred to, Blouke Carus (Ph.D., Cal. Tech.),
owned Open Court Publishing when Collections for Young Scholars was
authored. He continues to publish children's periodicals, including Cricket,
Lady Bug, Baby Bug, Spider, and Cicada (mostly fiction), and
Muse (the official children's periodical of the Smithsonian Institution).
However, Dr. Carus did not write Collections for Young Scholars, nor does he
author any of the other Open Court works he publishes. In the early 1970s, he
decided to develop and publish Open Court in response to what he perceived to be
the poor quality of the instructional materials available to his children. In
March 1996, the Carus family sold Open Court's classroom reading series to
McGraw-Hill.
Over the past thirty years, the authorship of Open Court has changed somewhat
across editions. The senior authors of the 1995 edition, Collections for
Young Scholars, were: Marilyn Jager Adams, Valerie Anderson, Carl Bereiter,
Ann Brown, Joe Campione, Robbie Case, Jan Hirshberg, Anne McKeough, Michael
Pressley, and Marlene Scardamalia. This information is available on the title
page of the Open Court teacher guides or student anthologies. In addition, brief
biographies of the authors are provided in the front matter of the teacher
guides.
(4) Moustafa & Land, p. 1:
The story of Open Court ... begins [with the study conducted by Foorman et al.
(1998) in Houston]. There were many problems with the [Foorman et al., 1998]
research.
There are many errors in Moustafa and Land's description of the design and
results of the Foorman et al. (1998) study. For example, from their assertion
that the researchers collected data on only "3 to 8 children in each of the four
types of classrooms," the reader might conclude that the study was too small to
permit thorough analysis. Closer examination of the study shows, however, that
the results are based on data collected from 53 classroom teachers, 28 Title 1
teachers, and 285 children in 19 different elementary schools. Further, a glance
at the comprehension data in Table 5 in Foorman et al.'s (1998, p. 50) article
shows that the Open Court students outperformed every comparison group on every
comprehension measure. As a possible source of Moustafa and Land's claims to the
contrary, one cell in the table shows a higher standard score (83.1 versus 81.8)
for one of the groups receiving "contemporary reading instruction" over the Open
Court group. However, the table also indicates that 25% of this "contemporary"
group read too poorly to participate in this test.
Among the Open Court children, only 14% were so excused. Moreover, Foorman et
al. (1998, p. 51) found that, overall, "The direct instruction [Open Court]
group approached national average on decoding (43rd percentile) and passage
comprehension (45th percentile) compared with the IC-R ['contemporary'] group's
means of 29th and 35th percentile, respectively." Not only did those children
whom Moustafa and Land term the "contemporary" group score more poorly on
average but further, as reported by Foorman et al. (1998, p. 51), they were
"much more likely" to score at levels conventionally accepted as indicative of
reading disability.
Readers interested in the Houston researchers' own responses to such
criticisms are referred to their recently invited article in Educational
Researcher, the journal of the American Educational Research Association
(Foorman et al., 2000), and to the Houston research team's website
(cars.uth.tmc.edu).
(5) Moustafa & Land, p. 1:
For one, the study received significant financial and personnel support from
McGraw-Hill, the publisher of Open Court (Taylor, 1998, pp. 79 & 298).
All instructional materials in the Foorman et al. (1998) study,
including the Open Court materials, were provided, free of charge, to the
research project. The Open Court materials were provided by Open Court
Publishing, not McGraw-Hill, which did not own the Open Court reading
program in 1994. As reported by Foorman et al. (1998, p. 40), "Because [the
direct code] group used basal materials that were new to the teachers, a
representative from the publisher spent one day orienting the teachers to the
materials." This level of inservice support is routinely provided by publishers
to schools newly adopting the Open Court curriculum and did not exceed the
inservice opportunities enjoyed by teachers in other conditions in the study.
The insinuation of "significant financial and personnel support" is that the
publisher provided special incentives, financial and perhaps otherwise, to
researchers involved in the study. This is among the unfounded allegations made
in Taylor's (1998) book that is cited by Moustafa and Land. Such charges of
scientific compromise and influence peddling are extremely serious and have been
widely repeated. Yet, there exists not one shred of evidence that points to any
impropriety, financial or otherwise, on the part of the research team or the
publishers of Open Court in the conduct of this study. Indeed, the excerpt below
is from a note that the Houston researchers wrote to Taylor before she
published her book and which she chose not to include.
To reiterate previous responses, the study was not biased in favor of a
particular group, nor is there any relationship between our group and the
current or previous owners of Open Court. In fact, we approached Open Court at
the recommendation of the school district in which we were conducting the
research. We proposed to use DISTAR, but it was not acceptable to the District.
District officials suggested Open Court because the District used Open Court
Math. When we approached Open Court, which at that time was not owned by
SRA/McGraw-Hill, we learned that there was a new edition of Open Court Reading.
We did not have sufficient funds to purchase Open Court because it was not
budgeted in the original proposal. Open Court generously provided us with the
prepublication version of the curriculum, and some assistance with
implementation, but teachers who used Open Court did not receive more intensive
training (Response to
Taylor, 1996).
Open Court Publishing assisted with this project at their own expense, and at
considerable risk of exposure, for the study was conducted in schools that
previously had poor reading outcomes.
It must also be appreciated that what is at stake here extends well beyond
any particulars of the Foorman et al. (1998) study. "Unfortunately," as argued
in the National Research Council's report, "the efficacy of commercial basal
programs is rarely evaluated.... Given the programs' potential for supporting
teachers, as well as teachers' widespread use and even dependence on these
programs in the classroom, such evaluation should be a priority for public
policy" (1998, p. 194). Finally, it needs to be reiterated that the Open Court
program was not acquired by McGraw-Hill until spring of 1996. In 1994, when the
study began, McGraw-Hill's only connection was as owner of the rejected program,
DISTAR. In other words, the suggestion that the contribution of materials and
training to the Houston project was part of a methodical and foresighted
conspiracy by McGraw-Hill to influence the outcome of the study and to use the
results to enhance market share is simply untrue.
(6) Moustafa & Land, p. 2:
How did the NICHD Houston research come to effect [sic] instructional policy in
California?
The Houston study (Foorman et al., 1998) postdated, and thus had no causal
effect on, either key policy or legislation. By May 1996, California's two major
policy documents on reading were completed, or nearly so. Every Child a
Reader: The Report of the California Reading Task Force (1995), had been
published. A follow-up reading advisory, Teaching Reading: A Balanced,
Comprehensive Approach to Teaching Reading in Prekindergarten Through Grade
Three, Program Reading Advisory (1997), was largely finished by early May
1996. (As was the case with the adoption report, the State of California
printing house took over six months to publish the document for wide
dissemination.) The reading advisory was developed to provide more information
about the components of an exemplary reading program. When completed, it was
adopted by the California Department of Education, the State School Board, and
the Commission on Teacher Credentialing, and was sent to every primary grade
teacher and administrator in California. In part it stated:
To be complete and balanced and to meet the literacy needs of all
students, including English language learners and students with special needs,
any early reading program must include the following instructional components:
phonemic awareness; letter names and shapes; systematic, explicit phonics;
spelling; vocabulary development; comprehension; and higher-order thinking;
and appropriate instructional materials (page 4).
The California legislature's decision to require explicit, systematic phonics
in instructional materials, and its decision to require training in explicit,
systematic phonics for teachers, also had been established prior to May 1996.
The following bills (including chapter numbers; the last two digits are the year
the bill became law) became law, or had finished the legislative process and
were awaiting funding, prior to May 1996.
Assembly Bill 1504, Burton (Ch. 764-95), among other things,
required the systematic instruction of spelling.
Assembly Bill 170, Alpert, Burton, Conroy (Ch. 765-95), also
known as the "ABC" bill, required, among other things, that reading and
mathematics instruction "…include, but not be limited to, systematic, explicit
phonics, spelling, and basic computational skills…"
Assembly Bill 3482, Davis, Johnston (Ch. 196-96), and the Goals
2000 federal legislation that was tied to AB 3482, provided funds for the
professional development of teachers by approved providers. The requirements,
known as "A to K," were to include, but not be limited to, the following:
A. phonemic awareness
B. systematic, explicit phonics instruction (including sound-symbol
relationships, decoding, and word attack skills)
C. spelling instruction
D. diagnosis of reading deficiencies
E. research on how children learn to read
F. research on how proficient readers read
G. structure of the English language
H. relationships between reading, writing, and spelling
I. planning and delivery of appropriate reading instruction based on assessment
and evaluation
J. means of improving reading comprehension, and
K. pupil independent reading of good books and the relationship of that activity
to improved reading performance.
A widely stated complaint is that the legislation privileged phonics to the
exclusion of all else. Detractors generally overlook, however, that these
skills were but a few items on a list of requirements aimed at ensuring complete
coverage of the issues and challenges of early literacy instruction, where that
instruction includes language development, comprehension instruction, and wide
reading.
Again, as is clearly seen, instructional policy in reading in California
could not have been initiated and/or developed as a result of the Foorman et al.
(1998) study.
(7) Moustafa & Land, p. 2:
The NICHD Houston research was presented to legislators and other policy makers
as "reliable, replicable" research. May, 1996,...Foorman and the Director of
NICHD, Reid Lyon, presented it to the California Senate Education Committee.
Researchers and educators with views consistent with four decades of replicated
research were not allowed to testify.
In May 1996, the California Assembly Education Committee (and not the
"Senate Education Committee") asked NICHD to provide a review of its
reading research program. Reid Lyon, who is NICHD Branch Chief for Child
Development and Behavior (and not "Director of NICHD"), complied. In doing
so, he asked Barbara Foorman to present data on the Houston study as an example
of how intervention research is designed and conducted. For a legislative
committee to invite expert testimony on a topic before them is not unusual; it
is normal procedure.
Beyond that, Moustafa and Land seem to be suggesting either that the Houston
study (Foorman et al., 1998) does not qualify as reliable, replicable research
or that its results conflict with those of other studies that do so qualify. Yet
this is not the case. Based on an exhaustive search of the last thirty years of
refereed journals, the National Reading Panel (2000) statistically confirmed
that, across soundly designed experimental studies, and regardless of the type
of less methodical alphabetic instruction received by the comparison groups,
systematic phonics instruction is of significant and inarguable benefit to young
readers. Furthermore, though the Houston study was unusual in its large-scale,
real-world implementation, its results were not. Across the many substantial and
formal investigations of the issue, the conclusion is always the same (Adams,
1990; Anderson et al., 1985; Bond & Dykstra, 1967; Chall, 1967; National Reading
Panel, 2000; National Research Council, 1998). Literacy growth depends on
learning to recognize written words easily and accurately, and learning to
recognize words easily and accurately is more quickly and securely achieved
where students receive explicit, systematic instruction in the logic and
conventions of the writing system.
In summary, and contrary to Moustafa and Land's claim, the Foorman et al.
(1998) study did not have an impact on reading policy in California.
(8) Moustafa & Land, p. 2:
In December 1996, the California Board of Education overruled the
reading/language arts teacher specialists on its Reading/Lanuage Arts Committee
and added Open Court to California's textbook adoption list. In 1997, the Board
of Education of Sacramento City Unified School District adopted Open Court for
55 of its 60 elementary school, K-6.
Again, as reviewed in point (1) above, the first sentence of this excerpt is
simply not true. Although the second is correct, it carries an insinuation that
warrants a response. In particular, the Sacramento City School Board's decision to
adopt the Open Court curriculum was both considered and participatory, involving
more than a dozen public hearings. In addition, the Board's selection of the
Open Court program was supported by both the district's administration and the
Sacramento City Teachers' Association. It enjoyed widespread support within the
district, support that has been justified by subsequent increases in reading
scores.
(9) Moustafa & Land, p. 3:
In their analyses of Open Court's success in Sacramento, Helfand and Ellerbee
compared Sacramento's scores in one grade in 1998 to scores in the same grade in
1999. In doing so, they were analyzing teaching, not learning.
As background, the issue here is that the SAT9 scores of Sacramento City
Unified School District rose dramatically from 1998 to 1999. In the above
excerpt, Moustafa and Land are challenging Helfand and Ellerbee for publicly
suggesting that the test score rise was related to the district's
contemporaneous shift to the Open Court reading program. The problem is that
Moustafa and Land's criticism makes no sense. First, their notion that Helfand
and Ellerbee's interpretation of the SAT9 scores pertained to teaching and not
learning is untenable. Student performance is a variable that depends on both
teaching and learning. Second, since the question at hand is whether the
instructional program had a measurable impact on student growth, differences in
teaching would seem of primary interest anyhow.
In any case, Helfand and Ellerbee's logic is wholly sound. That is, it is a
safe assumption that, across the 60 elementary schools of the Sacramento City
Unified District, neither the population of teachers nor the population of
children changed significantly from 1998 to 1999. By implication, then, the
dramatic change in the district's test outcomes must have been due to some other
factor. Given that the only unique and significant district-wide change from
1998 to 1999 was the adoption of the Open Court program, the conclusion that the
new instructional program was a causal factor in students' improved outcomes is
well-warranted.
(10) Moustafa & Land, p. 4:
We investigated the effectiveness of Open Court schools against non-scripted
programs in schools serving similar economic groups in the Los Angeles Unified
School District.
Schools differ in many ways besides classroom instruction, and many of these
differences influence the impact of classroom instruction. Yet Moustafa and
Land chose to gauge school comparability on the basis of a single
characteristic: the percentage of children receiving free or reduced lunch. This
is surprising because the State of California has already developed a multiple
criteria system for establishing school comparability.
The State of California, as provided by the
Public Schools Accountability Act (PSAA) (Ch. 3/1999), has developed both a
"Statewide Rank" and a "Similar Schools Rank." The Statewide Rank provides a ranking
of schools by deciles by grade level of instruction. To develop the Similar
Schools Rank category, the State of California generates separately, for each of
its 8,563 schools, a list of 100 other schools with similar extra-instructional
characteristics. These extra-instructional factors are:
Pupil mobility
Pupil ethnicity
Pupil socioeconomic status
Percentage of teachers who are fully credentialed
Percentage of teachers who hold emergency credentials
Percentage of pupils who are English language learners
Average class size per grade level
Whether the schools operate multitrack year-round educational programs
When placed together, a school's reading scores, its Statewide Rank, and its
Similar Schools Rank provide a reliable measure of its academic achievement.
Given all the numbers that Moustafa and Land pulled from the API website, it
is hard to understand how they overlooked these characteristics and scores.
Regrettably, even on the single variable they did use, Moustafa and Land failed
to constitute their Open Court and non-Open Court samples fairly: the percentage
of children receiving free or reduced lunch across the Open Court schools was
89%, while for the non-Open Court schools it was only 79%.
(11) Moustafa & Land, p. 4:
We limited our study to schools that either (1) used Open Court, not in
combination with another program or (2) one of the four non-scripted programs in
the study. Of these 155 schools, 19 used Open Court. Nine Open Court schools
were long-term Open Court schools.
Because Moustafa and Land give no indication of how they ascertained a
school's use of Open Court, all 19 schools were contacted and their reading
programs discussed. Of Moustafa and Land's 9 "long-term Open Court schools,"
7 began using Open Court on a limited basis in 1987 as part of a program called
the "Ten Schools Program." The other two schools (116th St. and 118th St.) in
Moustafa and Land's "long-term" Open Court sample did not join the "Ten Schools
Program" (now called "Ten plus Two") and begin using Open Court until the
1998-99 school year. Since Moustafa and Land's analysis is based on Spring 1999
SAT9 scores, these last two schools do not qualify as "long-term" Open Court
users.
By contacting the schools, it was confirmed that Open Court has been used
selectively, if at all, in many of the nineteen schools mentioned by Moustafa
and Land. Only schools in the original Ten Schools Project have consistently
used Open Court and then usually only in grades 1 and 2. In grades 3-5, several
of the Ten Schools said they used other approaches, the most common called "Core
Literature." Further, Open Court did not produce a comprehensive kindergarten
program until 1995. Since that program was not adopted in California until the
end of 1996, it was not used in any of the schools until 1997.
(12) Moustafa & Land, p. 4:
We compared Open Court against the non-scripted programs in two ways: (1)
average second to fifth grade SAT 9 reading increases/decreases at each
school...In the second to fifth grade comparisons, we sorted the schools into
one of three groups: those with increases of 10 or more percentile points
between second and fifth grade, and those with decreases of 10 or more
percentile points between second and fifth grade.
For purposes of evaluating the relative educational benefits of one school
versus another, no informational value whatsoever is gained by comparing the
numbers obtained by subtracting second from fifth grade 1999 SAT9 scores.
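The problem can be illustrated with a short sketch. The school names and percentile scores below are hypothetical, invented purely to show how Moustafa and Land's sorting rule behaves; they are not drawn from their data.

```python
# Hypothetical illustration of why subtracting Grade 2 from Grade 5
# percentile scores says nothing about a school's overall performance.
# Names and scores are invented for this sketch.
schools = {
    "School A (strong at both grades)": {"grade2": 95, "grade5": 94},
    "School B (weak at both grades)":   {"grade2": 20, "grade5": 35},
}

for name, s in schools.items():
    diff = s["grade5"] - s["grade2"]
    # Moustafa and Land's sorting rule: +/- 10 percentile points
    if diff >= 10:
        label = "increase"
    elif diff <= -10:
        label = "decrease"
    else:
        label = "neither"
    print(f"{name}: Grade 5 - Grade 2 = {diff:+d} -> {label}")
```

By this metric, the consistently high-performing School A is credited with no gain at all, while the consistently weaker School B registers a large "increase."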
In dismissing Helfand and Ellerbee's interpretation of Sacramento's SAT9
scores, Moustafa and Land argued that proper assessment of learning requires one
"To compare the same-or nearly the same-children," and to do so, they continued,
"we need to compare scores from one grade, one year, to the next grade, the next
year" (p. 3). Presumably, the approach they adopt here, of assessing learning by
subtracting Grade 2 from Grade 5 SAT9 scores, is an effort to turn their
argument to practice. But wait. All of the test scores used in their analysis
are from the 1999 SAT9. The differences on which they have built their
analysis were obtained by subtracting the SAT9 scores for students completing
Grade 2 in 1999 from those for students completing Grade 5 in 1999. There is no
sense in which Moustafa and Land are comparing the same children across
different grades, as they insisted was the proper procedure. Instead, they are
comparing different children in different grades. Additionally, the
children whose scores are subtracted one from the other are separated by four
school years and come from schools with notoriously high mobility rates.
Moreover, they come from schools where the past four years have witnessed a
dramatic transition in early reading pedagogy.
Even so, Moustafa and Land's analysis is negated by a more fundamental flaw.
In particular, their use of the difference between the Grade 2 and Grade 5 SAT9
scores fails to take into account the schools' overall educational performance.
By their logic, for example, a school that performed in the 99th percentile in
both second and fifth grade would be categorized as showing no sign of
educational quality. To clarify the problem, consider the actual school
comparison in Table 1.
Table 1. Moustafa and Land categorize the educational performance of the
Sheridan School as outstanding and the Kelso School as subnormal.
Three sets of mean scores are provided here: the 2nd to 5th grade reading scores, the Statewide Rank, and the Similar Schools Rank. The combination of the reading scores and the Statewide and Similar Schools Ranks gives a comprehensive view of a school's academic health. The first set of mean scores is for the original Ten Schools. The second set is for the combined Ten Schools, now called the "10+2" schools. The third set is for the original ten comparison schools.
Statistical analyses of these data confirm that, even including the two schools that did not join the Ten Schools Project until the 1998-99 school year, the Open Court (OC) schools significantly outperformed the comparison schools as measured by their respective Statewide Ranks, t(17) = 2.24, p = .04, or by their Similar Schools Ranks, t(17) = 3.30, p = .004 (all p values are for two-tailed comparisons). Further, a 2 x 4 (Project x Grades) analysis of variance affirmed a strong effect of project participation across all grades, F(1,20) = 279.57, p < .0001, as well as an effect of grades, F(3,60) = 5.62, p < .01, but no interaction between them, F(3,60) = 0.68, p = .57. The performance differences are even more pronounced when the original Ten Schools alone are compared to the original comparison schools.
The point of these analyses is simply that, given an appropriately constituted and a priori comparison sample, sensible measures of educational impact, and sound statistical methods, evidence exists that the Open Court schools outperformed the non-Open Court schools. In sum, Moustafa and Land's third major conclusion, that the non-Open Court schools in the Los Angeles Unified School District outperformed comparable Open Court schools, is erroneous.
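For readers unfamiliar with the two-sample t-test used above, the computation can be sketched as follows. The decile ranks in this example are invented for illustration; they are not the actual Statewide or Similar Schools Ranks of the schools analyzed.

```python
# Sketch of a pooled-variance two-sample t-test, the kind of comparison
# used above for the school ranks. The rank values are hypothetical.
from statistics import mean, variance

def two_sample_t(a, b):
    """Return Student's t statistic (pooled variance) and degrees of freedom."""
    na, nb = len(a), len(b)
    # pooled sample variance
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2

oc_ranks   = [7, 8, 6, 9, 7, 8, 5, 6, 7]       # hypothetical decile ranks
comp_ranks = [4, 5, 3, 6, 4, 5, 2, 5, 4, 3]    # hypothetical decile ranks
t, df = two_sample_t(oc_ranks, comp_ranks)
print(f"t({df}) = {t:.2f}")
```

With 9 schools in one group and 10 in the other, the degrees of freedom are 9 + 10 - 2 = 17, matching the t(17) reported above; the t value itself, of course, depends on the data.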
Conclusion
Moustafa and Land's attempt to discredit Open Court is deeply flawed in both scholarship and argument. Its facts are wrong and its analyses are misleading. All three of the paper's major premises are shown to be false. The record reveals that Open Court was recommended by every evaluation and adoption board, that reading policy in California relied on a wide variety of research rather than a single study, and that, when comparable schools are analyzed, the Open Court schools outperformed the non-Open Court schools.
References
Adams, M. J. (1990). Beginning to read: Thinking and learning about print. Cambridge, MA: MIT Press.
American Federation of Teachers. (http://www.aft.org/Edissues/whatworks/seven/oc/index.htm)
Anderson, R. C., Hiebert, E. H., Scott, J. A., and Wilkinson, I. A. G. (1985). Becoming a nation of readers: The report of the commission on reading. Champaign, IL: Center for the Study of Reading.
Bond, G. L., and Dykstra, R. (1967). The cooperative research program in first-grade reading instruction. Reading Research Quarterly, 2, 5-142.
Resource Evaluation: Reading/Language Arts and English as a Second Language Adoption Report: Review of Instructional Resources: Kindergarten Through Grade Eight. (1997, p. 39). Sacramento, CA: California Department of Education. (The evaluations were finished and the report written by July 1996; it was not published in final form by the State of California printing house until 1997.)
Chall, J. S. (1967). Learning to read: The Great Debate. New York: McGraw-Hill.
Collections for Young Scholars. (1995). Peru, IL: Open Court Publishing Company.
Every child a reader: The report of the California Reading Task Force. (1995). Sacramento: California Department of Education.
Foorman, B. R., Francis, D. J., Fletcher, J. M., Schatschneider, C., and Mehta, P. (1998). The role of instruction in learning to read: Preventing reading failure in at-risk children. Journal of Educational Psychology, 90(1), 37-55.
Moustafa, M., & Land, R. (2000). The research base of Open Court and its translation into instructional policy in California. (http://curriculum.calstatela.edu/faculty/mmousta)
Moustafa, M. (2000). The research base of Open Court and its translation into instructional policy in California: The data. (http://curriculum.calstatela.edu/faculty/mmousta/Data)
National Assessment of Educational Progress. (1994). The NAEP 1992 technical report. Princeton, NJ: Educational Testing Service.
National Assessment of Educational Progress. (1995). NAEP 1994 reading: A first look: Findings from the National Assessment of Educational Progress (Revised ed.). Washington, DC: U.S. Government Printing Office.
National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Rockville, MD: National Institute of Child Health and Human Development, National Institutes of Health.
National Research Council. (1998). Preventing reading difficulties in young children. Washington, D.C.: National Academy Press.
Office of Student Integration. (2000). 1999 Academic Performance Index (API) report: TSP/original comparison schools. Los Angeles, CA: Los Angeles Unified School District.
Public Schools Accountability Act. (1999). (www.cde.ca.gov/psaa/api/base/fachsheet.pdf)
STAR data. (1999). (http://STAR.cde.ca.gov/)
Taylor, D. (1998). Beginning to read and the spin doctors of science. Urbana, IL: National Council of Teachers of English.
Teaching reading: A balanced, comprehensive approach to teaching reading in preK-grade 3. (1997). Sacramento, CA: California State Board of Education.
Acknowledgments
The author wishes to acknowledge and thank the following colleagues, who provided information and/or editorial support: Marilyn Jager Adams, Barbara Foorman, and Reid Lyon. Hollis Scarborough is also gratefully acknowledged for providing statistical support.