Can leaders trust the recommendations of the What Works Clearinghouse? Are they valid?
The above text on quantitative analysis raises questions about the quality of the WWC's reviews (see Chapter 3). Specifically, the text criticizes the WWC for validating the research behind 'Success for All' despite substantial published contrary evidence, and for concluding that 'charter schools' are more effective than 'traditional public schools' despite the tiny Effect Sizes (ESs). Are these problems anomalies, or are there deep-seated problems with the WWC's recommendations?
A striking recent article suggests that the problems with the WWC's recommendations are deep-seated. Ginsburg and Smith (2016) examined the evidence for all the math programs certified by the What Works Clearinghouse as having evidence of effectiveness provided by the most rigorous "gold standard" research design: Randomized Controlled Trials (RCTs). They reviewed all 18 math programs that had been certified by the WWC, covering 27 approved RCT studies. They found 12 potential threats to the usefulness of these studies and concluded that "…none of the RCTs provides useful information for consumers wishing to make informed judgments about what mathematics curriculum to purchase."
One of the key problems across the What Works Clearinghouse math studies was that, wherever Ginsburg and Smith (2016) were able to quantify the error produced by a threat, the error generated by even a single threat was at least as great as the Effect Size favoring the treatment group. In other words, the 'gold standard' of research is not so golden.
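The arithmetic behind this point can be sketched briefly. The numbers below are hypothetical illustrations, not figures from Ginsburg and Smith's analysis: they simply show that when the bias introduced by one validity threat matches the reported effect size, a true effect of zero becomes entirely plausible.

```python
# Hypothetical illustration: a reported effect size can be fully
# accounted for by bias from a single validity threat.

def cohens_d(mean_treat, mean_ctrl, pooled_sd):
    """Standardized mean difference (Cohen's d)."""
    return (mean_treat - mean_ctrl) / pooled_sd

# Assumed, made-up study numbers for illustration only.
reported_es = cohens_d(52.0, 50.0, 10.0)   # reported ES = 0.20
bias_from_threat = 0.20                    # error from one threat, same magnitude

# If the bias is subtracted, the plausible true effect is zero.
plausible_true_es = reported_es - bias_from_threat
print(reported_es, plausible_true_es)
```

The point of the sketch is that an unquantified bias of the same order as the effect size leaves the direction of the true effect undetermined, which is exactly why such studies give consumers little to act on.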
- Leaders cannot trust the recommendations of the What Works Clearinghouse (WWC). Leaders need to conduct their own due diligence using the techniques in this book—both on research in general and on the recommendations of the WWC.
- The fact that the WWC requires the most rigorous research methodology and statistical evidence, and there are still a dozen potential "threats" to the results, suggests that the real world of practice is too complex for the traditional experimental approach and its reliance on relative measures of performance. In other words, it is virtually impossible to establish full internal validity in applied experimental research, regardless of how rigorous the research standards are. (Simpler alternatives for assessing the effectiveness of interventions are discussed in Chapter 5 of my methodology book.)
Ginsburg, A., & Smith, M. S. (2016). Do randomized control trials meet the "gold standard"? A study of the usefulness of RCTs in the What Works Clearinghouse. The URL to this article is: