May 3, 2000
We would like to correct several errors in Coles’ (2000) account of our 1998 article
published in the Journal of Educational Psychology.
Coles claims that students at the control school were not tutored and that we
dropped the control school from the JEP article because the school was too high
poverty and the students were "tough" (p. 33). This claim is incorrect. Students at
this unseen control school were tutored. The school was never dropped; in fact, it
appears in the JEP article. Critics, including Coles and D. Taylor, have
misrepresented the purpose of this group in our study. Its role in the design
was to serve as a control on the professional development that was provided to
teachers who delivered the three study curricula. That is, teachers at this control
school participated in the District’s usual staff development rather than the
professional development provided by our research staff.
Coles criticizes the "less-stringent standard" (p. 33, bottom) that we used to show
that group differences on the Woodcock-Johnson Passage Comprehension (PC) test
favoring the Direct Code group were meaningful. Coles is referring to our use of
effect sizes, the most widely used statistic for reporting the strength of statistical
effects. The effect size on PC for the Direct Code vs. whole-language contrast
was .76. Effect sizes of .20 are typically considered weak, .50 moderate, and .80
and larger strong. Thus, the effect size of .76 is fairly large and of practical
significance. Coles' statement implies that we were misrepresenting the data.
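The effect-size statistic referred to above can be sketched numerically. The function below computes Cohen's d, the standardized mean difference; the group means, standard deviations, and sample sizes are hypothetical, chosen only so the contrast lands near the reported .76, and are not the study's actual data.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: difference between group means in pooled-SD units."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Hypothetical group summaries (NOT the study's data), chosen so the
# Direct Code vs. whole-language contrast lands near the reported .76:
d = cohens_d(mean1=100.0, sd1=10.0, n1=60, mean2=92.4, sd2=10.0, n2=60)
print(round(d, 2))  # 0.76
```

By the conventional benchmarks cited above (.20 weak, .50 moderate, .80 strong), a d of .76 falls between moderate and strong.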
Coles' recalculation of our Woodcock-Johnson data: In the spirit of professional
collegiality, we sent Coles a disk of the data for the 1998 JEP study when he
requested our data. The way in which he appeared to manipulate the PC means in
his commentary in Education Week (January 27, 1999) is addressed in our
response published in Education Week (April 21, 1999) and detailed on our website
(http://cars.uth.tmc.edu). A copy of this response is attached. At that time, Dr.
Coles would not share his calculations with us. Now that we have access to his
calculations from the book, we see that he did not delete data based on classroom
means, but rather deleted data from all classrooms in a given curriculum in a
particular grade at a particular school based on the school-level mean. In
particular, Coles removed the highest-scoring Direct Code school in Grade 1 (2
classrooms with 5 students in each) and the lowest-scoring school in the Implicit
Code instructional group in Grade 2 (6 classrooms with 1, 2 or 3 students in each).
He then "eyeballs" the remaining school means without regard for the number of
classrooms involved in computing those means, or the number of students involved
in computing the classroom-level means that make up the school means. Coles
gives all school means equal weight and, in so doing, dismisses the low scores
from the Implicit Code school with the most students.
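The weighting problem just described can be made concrete with a small sketch. The classroom means and sizes below are hypothetical, invented only to show how an unweighted average of school means can mask low scores from the school enrolling the most students.

```python
# Hypothetical classroom means and sizes (not the study's data):
classrooms = [  # (school, classroom_mean, n_students)
    ("A", 88.0, 22),  # large, lower-scoring classrooms...
    ("A", 86.0, 20),
    ("B", 97.0, 3),   # ...versus tiny, higher-scoring ones
    ("B", 99.0, 2),
]

def school_mean(school):
    """Average of a school's classroom means, ignoring class size."""
    vals = [m for s, m, n in classrooms if s == school]
    return sum(vals) / len(vals)

# Unweighted: each school counts equally, regardless of enrollment.
unweighted = (school_mean("A") + school_mean("B")) / 2

# Weighted: each student counts once, so the large school dominates.
total_n = sum(n for _, _, n in classrooms)
weighted = sum(m * n for _, m, n in classrooms) / total_n

print(round(unweighted, 1), round(weighted, 1))  # 92.5 88.2
```

The unweighted figure (92.5) sits 4 points above the per-student figure (88.2), because "eyeballing" equal-weighted school means lets 5 students in school B offset 42 students in school A.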
Coles further claims (p. 35) that the Direct Code group’s higher PC scores
came from two classrooms in a school in which the scores of the students were
"uncharacteristically uniform, unlike the considerably varied student scores in all
other classrooms and schools." This statement is not correct. The two smallest
classroom standard deviations were 1.0 and 4.9 for Implicit Code, as compared
to 2.6 and 4.8 for Embedded Code, and 4.8 and 5.8 for Direct Code. If anything,
the Implicit Code classroom with a standard deviation of 1.0 is the more
"uncharacteristically uniform." We disagree with his emphasis on school means
over classroom means. But given his desire to focus on school means, we note
his failure to emphasize the differences in outcomes in the one school where both
Direct Code and Implicit Code were used. Specifically, in that one school, the
two Direct Code classrooms had a combined average of 100.4 in Grade 1,
whereas the two Implicit Code classrooms had a combined average of 93.4. In
Grade 2, the combined average for the two Implicit Code classrooms was 95.8 as
compared to 93.3 for the two Direct Code classrooms. These are clear
examples of Coles’ jaundiced and biased "re-analysis" of the data in the 1998
paper.
Coles misrepresents the word reading differences as levels of attainment rather
than rate. Thus, it is not that 38% of the whole-language students could read 2.5
words or fewer out of 50. The correct statement is that 38% of the
whole-language students improved in word reading at a rate of 2.5 words or fewer
per year. This rate estimate has been misconstrued by Coles and by others. We
have emphasized that one cannot interpret this measure in an absolute sense, but
only as it relates to the experimental word list used in the study. It is, however,
perfectly legitimate to compare this rate to the mean rates of growth observed in
the other instructional groups. The results speak for themselves in showing that
many children in the Implicit Code and Embedded Code groups did not change over
the year.
Coles misrepresents the pre-publication history of the 1998 JEP article. For an
accurate accounting, see our response to B. Taylor et al. on our website.
Our 1998 study is only one study out of 30 years of studies that have pointed to the
importance of explicit alphabetic instruction for children at risk for reading
difficulties (see Fletcher & Lyon, 1999, for a review). We clearly state 9 specific
limitations of the 1998 study in the conclusions. Coles ignores our statement of
limitations and ignores the context in which this study resides.
Personal attack on Foorman and Adams: Marilyn Adams is not lying, as Coles
suggests, when she said that Foorman's request to use Open Court "came out of
the blue" (p. 44). Our 1993 grant proposal had indicated that we would use
DISTAR. At the last minute, the school district said that we could not use
DISTAR and that, if we wanted a direct instruction (DI) approach, they
recommended Open Court Reading because they used Open Court Math. So the district directed us to
Open Court and Foorman was not even aware that Open Court had been
rewritten when she made the request. For that matter, she and Adams did not
have any formal or collegial relationship at that time, knowing each other only
through reputation. A relationship did develop, which was stimulated by their
appointments to the NRC committee that produced the report "Preventing reading
difficulties in young children." Why Coles needs to attribute motive and malign
both Adams and Foorman is mysterious.
Coles is also incorrect in his description of how Phonemic awareness in young
children (Adams, Foorman, Lundberg, & Beeler, 1998) came about. Foorman
had written the Lundberg, Frost, and Petersen (1988) phonological awareness
activities into our proposal to NICHD in 1992-93. Ingvar Lundberg told Foorman
about an English translation of the activities by Adams and Huggins. Foorman
contacted Marilyn Adams to ask for a copy for research purposes. Adams
graciously agreed. We used her translation during the 1993-94 and 1994-95
school years, with teachers giving us feedback that we incorporated into the
activities. People started contacting us for copies of the revised curriculum.
Thus, after this kindergarten research was complete in 1996-97, Adams and
Foorman began to prepare the curriculum for publication. So as to avoid any
perception of conflict of interest, Foorman divested herself of royalties from the
sales of the publication. When Coles contacted Foorman about the publication
date for Phonemic awareness in young children she explained that it was 1998,
long after our kindergarten studies had ended, and that she had divested herself
of royalties. Nevertheless, Coles depicts Foorman as highly unethical (p. 50):
"...I am not certain where the royalties will go, but direct financial benefit from
the book’s publication is not my concern. My concern is about interlocking
research and collaboration that can create conflicts of interest, which in turn can
impair objectivity. Capital in the form of professional capital can be accrued in
various ways, such as through enhancing one’s professional reputation, status,
and influence and by obtaining research and program grants. One way of
earning these kinds of professional gains is through an educational program that is
highly regarded and widely used." Again, what motivates Coles to malign
researchers with whom he simply disagrees?
When the Coles book first came out, we dismissed it, assuming it was so preposterous
and personal that no one would take it seriously. Since then it has shown up, along with Denny
Taylor’s (1998) Spin Doctors of Science, on the desk of officials in school districts where
we conduct research studies. In one district, officials suddenly began reconsidering their
approval for our study because of the suggestions in these books that we were unethical.
The subtitle of Coles' book is particularly worrisome from a school district perspective:
"The bad science that hurts children." Never mind that Coles never tells us what the good
science is that helps children. The title sells books and scares non-researchers.
However, in this book (Misreading Reading) the personal attacks are inaccurate,
unprofessional, unforgivable, and harmful to reading research and the children it
serves.
Sincerely,
Barbara R. Foorman, Ph.D. and Jack Fletcher, Ph.D.