July 2017: Value-Added Modeling and Educational Accountability


Everson, K. C. (2017). Value-added modeling and educational accountability: Are we answering the real questions? Review of Educational Research, 87(1), 35–70.

Summary by Dr. Pat Taylor

Overview

School accountability is an important ongoing topic in the national discussion surrounding our educational system, and it has taken on ever greater importance since the passage of the No Child Left Behind Act in the early 2000s. A key aspect of No Child Left Behind was the expectation that educational improvements would be achieved by tracking student outcomes as part of an overarching accountability system for schools, administrators, and teachers. One version of this system is the value-added (VA) model, which attempts to determine the influence that a teacher and/or school has had on student growth over a school year. Several states currently require that VA models be used, at least in part, to evaluate teachers, and these evaluations often have implications for salaries and employment status. VA models employ sophisticated statistical analyses that rest on several assumptions. Given the high stakes attached to VA models, it is imperative that any violations of those underlying assumptions be fully understood; only then can the validity of VA-guided decisions be evaluated. To that end, Everson (2017) conducted a review of the existing research literature on VA models.
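
The review spans many VA specifications, but for orientation, a common covariate-adjustment form (a generic sketch, not a model drawn from Everson’s article) can be written as

    y_{it} = \lambda y_{i,t-1} + X_i \gamma + \theta_{j(i)} + \epsilon_{it}

where y_{it} is student i’s test score in year t, y_{i,t-1} is the prior-year score, X_i collects whatever student-level predictors the analyst chooses, \theta_{j(i)} is the estimated “value added” of the student’s teacher j(i), and \epsilon_{it} is residual error. Most of the concerns Everson catalogs amount to questions about this kind of model: which predictors belong in X_i, how \theta is estimated, and what assumptions justify reading \theta as a teacher’s causal effect.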

Everson included 99 articles that investigated methodological issues related to the use of VA models for educational accountability. Articles focused on policy issues related to VA use were excluded; the review was limited to articles addressing model choice, assumptions and interpretation, or measurement issues. Of the included articles, 17 were reviews of VA modeling issues (review articles) and the remaining 82 presented original research on VA models (research articles). Everson used the review articles to create a table (Table 1 in Everson’s article) of areas of pressing need for research on VA models. This table lists 15 topics representing a broad view of concerns about VA models, and the authors of the review articles showed a high degree of agreement regarding the justification and impact of these concerns. The research articles were then aligned with the 15 concerns, allowing the author to present, for each concern, an explanation and the current state of research.

Key Findings

One key finding from this study is that there is limited alignment between the issues perceived as important (in the review articles) and the issues actually researched (in the research articles). Everson suggests that this misalignment reflects the difference between what can be studied and what should be studied, and it provides a clear view of areas that would benefit from future research.

Other key findings relate to several major issues with the use of VA models, including the choice of predictors, problems with the simultaneous estimation of teacher and school effects, and the instability of teacher effects. For the choice of predictors, it may be necessary to clarify the intent of the model: Kelly and Downey (2010) suggest that appropriate comparisons of teachers depend on the appropriate selection of predictor variables. Everson raises the concern that, without a clearly determined intent for the model, future uses of VA models may be guided more by the convenience of available data than by the purposeful selection of predictors.
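
To make the stakes of predictor choice concrete, the short simulation below (entirely hypothetical data and variable names, not drawn from Everson or from Kelly and Downey) shows how including or omitting a single background covariate can reorder teachers fit to exactly the same data:

    import numpy as np

    rng = np.random.default_rng(42)
    n_teachers, n_students = 20, 30

    # Simulated data: 30 students per teacher, with a background covariate x
    # (e.g., prior achievement) deliberately correlated with teacher assignment.
    teacher = np.repeat(np.arange(n_teachers), n_students)
    x = rng.normal(teacher / n_teachers, 1.0)
    true_effect = rng.normal(0.0, 0.2, n_teachers)
    y = true_effect[teacher] + 0.5 * x + rng.normal(0.0, 1.0, teacher.size)

    # Estimate 1: raw mean outcome per teacher (no predictors).
    raw = np.array([y[teacher == j].mean() for j in range(n_teachers)])

    # Estimate 2: teacher effects after adjusting for the covariate, via least
    # squares on the covariate plus one-hot teacher indicators.
    dummies = np.eye(n_teachers)[teacher]
    coef, *_ = np.linalg.lstsq(np.column_stack([x, dummies]), y, rcond=None)
    adjusted = coef[1:]

    # The two predictor choices can order teachers quite differently.
    rank_raw = raw.argsort().argsort()
    rank_adj = adjusted.argsort().argsort()
    print("rank correlation:", round(np.corrcoef(rank_raw, rank_adj)[0, 1], 2))

Because the covariate is correlated with teacher assignment, omitting it shifts part of the covariate’s influence into the teacher estimates, illustrating why the selection of predictors shapes which teacher comparisons are defensible.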

The simultaneous estimation of teacher and school effects presents a problem. Simultaneous estimation seems important so that school effects are not absorbed into teacher effects. However, when both teacher and school effects are estimated at the same time, teacher comparisons are valid only for teachers from the same school. Based on this review, there appears to be no widely accepted method for modeling teacher and school effects simultaneously while still allowing teacher comparisons across different schools.
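
The identification problem can be made concrete by extending the earlier sketch with a school effect (again, a generic illustration rather than a model from the review):

    y_{it} = \lambda y_{i,t-1} + X_i \gamma + \theta_{j(i)} + \phi_{s(i)} + \epsilon_{it}

Because all of teacher j’s students attend the same school s(j), the data identify only the sum \theta_j + \phi_{s(j)}: adding a constant to a school’s effect \phi and subtracting it from each of that school’s teacher effects \theta leaves the model’s fit unchanged. Differences between teacher effects are therefore pinned down only within a school, which is precisely the comparison limitation described above.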

A third issue is the instability of an estimated teacher effect across time or across measures. Whether instability is defined with respect to time or to tests, there was consensus among the research articles that VA estimates suffer from some level of instability. It is not clear whether this instability reflects the measurement of a genuinely malleable construct (teacher effectiveness may itself change) or a problem with the model. Several other issues are identified and explored in the paper.
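
Researchers often quantify this instability by correlating a teacher’s estimated effects across adjacent years. The short sketch below uses entirely hypothetical numbers (not data from any study in the review) just to show the calculation:

    import numpy as np

    # Hypothetical estimated effects for the same 100 teachers in two adjacent
    # years; in practice each vector would come from fitting a VA model to one
    # year's student data. The 0.5 weight builds in partial carry-over from
    # year to year, and the added noise stands in for everything that does not.
    rng = np.random.default_rng(0)
    va_year1 = rng.normal(0.0, 0.15, 100)
    va_year2 = 0.5 * va_year1 + rng.normal(0.0, 0.13, 100)

    # Year-to-year correlation of the estimates: values well below 1.0 reflect
    # the kind of instability the research articles report.
    r = np.corrcoef(va_year1, va_year2)[0, 1]
    print(f"year-to-year correlation of estimated teacher effects: {r:.2f}")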

Recommendations

From this review of literature, the author derived three pressing problems that in turn serve as recommendations for future research. First, the misalignment between the concerns of theorists and the work done by researchers leaves gaps in our understanding of VA models; future research needs to explore those less understood concerns. Second, existing research has primarily focused on individual areas of concern in isolation, and the potential for interactions between these areas has been largely ignored. When multiple areas of concern were explored in the same study, an interaction between two distinct areas was found, suggesting that other interactions may exist and could provide insight into the functioning of VA models; future research should examine multiple areas of concern and the interactions among them. Finally, many of the questions regarding VA models cannot be answered until the purpose of the models is adequately defined. This will require answering several philosophical questions, including what constitutes good teaching and whether comparisons of teachers across different tests and/or grade levels are meaningful. Fundamentally, one must conclude that we need to identify what we want VA models to achieve before we can determine how well they achieve it.

Reference

Kelly, A., & Downey, C. (2010). Value-added measures for schools in England: Looking inside the “black box” of complex metrics. Educational Assessment, Evaluation and Accountability, 22, 181–198. https://doi.org/10.1007/s11092-010-9100-4