Saturday, July 2, 2016

New Article by Gene Glass on the Limits of Meta-analysis in Education

Gene Glass is one of education's most accomplished and perceptive quantitative methodologists. He is the developer of meta-analysis, the statistical method for consolidating the findings of related studies into a single result.

In a recent article in the journal Educational Researcher*, Dr. Glass notes the following interesting conclusions about meta-analysis.

  • Meta-analysis is unique in that it is possibly the only widely used quantitative methodological tool born in education research that has been adopted by medical research. Typically, education research seeks to emulate what medical research is doing to gain credibility as a science. In this case, medical research adopted this education methodology and now uses it to an even greater extent than education does.
  • Meta-analysis has not produced incontrovertible findings that can lead to education policy.

While the first conclusion is interesting and a coup for education research, the second is a bombshell. It is rare for the inventor of a method to point out its limitations after an extended period of use. The problem is that, unlike medical research, where related studies and drug trials tend to find similar effects, in education the variability of findings within a given meta-analysis is usually much larger than the overall effect. This is yet another problem with relying on small effect sizes in education to drive policy and conclusions that something is effective.
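To make the point concrete, here is a minimal sketch in Python of comparing a meta-analysis's overall effect to the spread of its individual study effects. The effect sizes are invented for illustration; they are not from any actual meta-analysis.

```python
# Hypothetical effect sizes (Cohen's d) from the studies in one meta-analysis.
# These numbers are invented for illustration only.
effect_sizes = [0.55, -0.10, 0.80, 0.05, 0.40, -0.25, 0.95, 0.15]

n = len(effect_sizes)
mean_es = sum(effect_sizes) / n

# Sample standard deviation of the study effects (a rough heterogeneity gauge).
variance = sum((es - mean_es) ** 2 for es in effect_sizes) / (n - 1)
sd_es = variance ** 0.5

print(f"Overall (unweighted) mean effect: {mean_es:.2f}")
print(f"Spread across studies (SD):       {sd_es:.2f}")

# When the spread exceeds the mean, the "average effect" hides more than it reveals.
print("Variability exceeds overall effect:", sd_es > mean_es)
```

With these invented numbers the spread (about 0.43) exceeds the mean effect (about 0.32), which is exactly the situation Glass describes: the overall result is swamped by study-to-study variability.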

While the existence of large variation within a given meta-analysis is problematic, it also represents an opportunity. One can look inside the accompanying table that lists the characteristics of all the studies in the meta-analysis, identify the studies with the biggest effect sizes, and then examine them for useful common characteristics. That is the gist of the section "How to Review a Meta-Analysis" in Chapter 3 of the text. Of course, it may be that the studies with the highest Effect Sizes are at the early elementary grades, or are very short-term interventions using a measure developed by the researcher, which will be of little help if you are trying to raise the scores of a middle or high school on more standardized measures. However, you never know whether the results can help improve your practice until you look into the most promising studies.
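As a sketch of that screening process, one might filter a meta-analysis's study table for the largest effects and then look for shared features. All study names and characteristics below are hypothetical, invented purely to illustrate the procedure:

```python
# Hypothetical rows from a meta-analysis's table of studies; all values invented.
studies = [
    {"study": "A", "effect_size": 0.85, "grades": "K-2",  "weeks": 6,  "measure": "researcher-made"},
    {"study": "B", "effect_size": 0.15, "grades": "6-8",  "weeks": 36, "measure": "standardized"},
    {"study": "C", "effect_size": 0.70, "grades": "K-2",  "weeks": 8,  "measure": "researcher-made"},
    {"study": "D", "effect_size": 0.20, "grades": "9-12", "weeks": 30, "measure": "standardized"},
    {"study": "E", "effect_size": 0.60, "grades": "3-5",  "weeks": 10, "measure": "researcher-made"},
]

# Pull out the studies with the biggest effect sizes...
top = sorted(studies, key=lambda s: s["effect_size"], reverse=True)[:3]

# ...and inspect them for common characteristics.
for s in top:
    print(s["study"], s["effect_size"], s["grades"], s["weeks"], s["measure"])

shared_measures = {s["measure"] for s in top}
print("Measures used by the top studies:", shared_measures)
```

In this invented table, every top study is a short intervention using a researcher-made measure, which is precisely the caveat discussed above: the most promising studies must be checked before generalizing to a middle or high school using standardized measures.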

* Glass, G. V. (2016). One Hundred Years of Research: Prudent Aspirations. Educational Researcher, 45(2), 69-72.

Wednesday, June 8, 2016

The Importance of Quantitative Data Mining to Improve Practice

One way to use quantitative research to improve practice is to apply simple techniques for critiquing the quantitative research in top journals in terms of its practical significance: primarily, how students in the experimental group actually performed. This is the emphasis in the first part of the alternative text Authentic Quantitative Analysis for Leadership Decision-Making and EdD Dissertations. Another, less discussed, way of using quantitative data is "data mining." While data mining is extensively employed in the business world to improve organizational performance, it also has wide applicability in education. Data mining simply means finding a key metric amidst all the data flowing through and within a school or district, using it to initiate improvement action, and then monitoring changes to that metric.

The importance of data mining comes across in a recent story on National Public Radio by Elissa Nadworny entitled:  What One District's Data Mining Did For Chronic Absence.

This story details how the superintendent asked the question: Do we have a serious chronic truancy problem?

This leads to the questions:

  • How should chronic truancy be defined?
  • How can that data be accessed?

The subsequent analysis showed that there was a huge unaddressed chronic truancy problem—40% of students fell into that category. The rest of the story deals with the steps taken, both initially and subsequently, to address the problem.
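A back-of-the-envelope version of that analysis might look like the following Python sketch. The 10% threshold is a commonly used definition of chronic absence, and the attendance records are invented for illustration; neither reflects the district's actual definition or data:

```python
# One hypothetical attendance record per student: (days absent, days enrolled).
records = {
    "s01": (20, 180), "s02": (4, 180),  "s03": (31, 175),
    "s04": (2, 180),  "s05": (19, 180), "s06": (45, 160),
    "s07": (6, 180),  "s08": (25, 180), "s09": (3, 170),
    "s10": (18, 180),
}

# A common definition of chronic absence: missing at least 10% of enrolled days.
THRESHOLD = 0.10

chronic = [sid for sid, (absent, enrolled) in records.items()
           if absent / enrolled >= THRESHOLD]

rate = len(chronic) / len(records)
print(f"Chronically absent: {len(chronic)} of {len(records)} students ({rate:.0%})")
```

The value of a sketch like this is that once the metric is defined and computed, it can be rerun each term against fresh attendance data to monitor whether the action plan is moving the number.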

Throughout the improvement effort those involved continued to monitor progress on the metric. Several years later the number of chronic absentees had been cut in half. This case study has the following implications for EdD programs:

  • Data Mining is an important form of quantitative analysis that those in EdD programs are likely to apply in practice,
  • Such analyses do not require advanced statistics at any stage of the process, and
  • If you establish appropriate practices that are actively monitored and iteratively improved upon, within several years you can produce BIG improvements.

In terms of the last point, there is no need to calculate statistical significance or effect sizes. You have produced a large, highly visible improvement that does not require any statistic to tell you if the improvement is significant. It clearly is--and these are the types of improvements schools need to strive for.

At the same time, data mining by itself does not produce improvement. You need to incorporate research findings and local knowledge to develop an appropriate action plan. Yet if leaders do not have a data-mining perspective, problems can easily be overlooked, and it becomes easy to miss that a plan of action is not having its intended effect. In this example, the district's strategy had no effect in the first year, and the district then had to improvise changes to the action plan.

What this means is that quantitative methods courses in EdD programs should spend some time on applied data mining.  Topics could include:

  • Identifying and defining key metrics of improvement,
  • Having students collect data around a metric of their choice for their school(s), 
  • Evaluating the results and developing an initial action plan, and
  • Methods of communicating the quantitative data to the entire community in an easy-to-understand fashion that can be used to mobilize a broad-scale response to problems.

The URL for the story is:

Tuesday, May 31, 2016

What Is the Fundamental Basis of Scientific Thought/Discovery? Implications for the EdD

My work on research methodology talks a lot about the fundamental nature of science and scientific discovery. This is because much of what I read in education about the role of theory, and about how to interpret the practical implications of research results, seems to me idealized, narrow, and artificial. Education's conceptions of the role of theory and of applying research outcomes to practice have become inbred in our wonderful profession, with a rhetoric and philosophy that have evolved in a fashion increasingly disjointed from the actual fundamental nature of science. This disjointedness is true for education in general and carries over to how theory, research, and science are presented in EdD programs.

Therefore, in order to understand the real role of theory and quantitative analysis I read a great deal about the fundamental nature of science and scientific discovery as conceived and practiced in the physical and medical sciences—as opposed to relying on prior education research texts and existing discussions about the role of theory in education. As a result, my text on Authentic Quantitative Analysis... has a great deal of original perspective on the role of theory and how to apply quantitative results to leadership practice that differs from existing conventional wisdom in education and in EdD programs.

Right now I am reading one of the most fascinating and unique books on the latest research findings in physics. The book, Seven Brief Lessons on Physics by Carlo Rovelli, is unique in that it is short, poetic, and non-technical—and still manages to explain, in easy-to-understand fashion, the most important events and ideas in physics about the nature of the universe we exist in. If you want to understand quantum mechanics, Einstein's thought experiments, etc., in a quick and interesting read, this is the book for you.

The insight from this book that sparked this post is what the author claims is the fundamental essence that drives scientific discovery. Is it theory? Is it evidence? NO! The author indicates that:

...before experiments, measurements, mathematics, and rigorous deductions, science is above all about visions. Science begins with a vision. Scientific thought is fed by a capacity to "see" things differently than they have previously been seen.

This is why Chapter 4 of my text emphasizes the creative and metaphorical nature of scientific discovery, and Part II talks about Design-Based Dissertations as an important option in EdD programs. The latter focuses on envisioning and trying new and unique approaches to solving problems, regardless of whether they are grounded in existing academic theory or research.

Wednesday, May 18, 2016

A Reassessment of the Role of Theory by a Leading Researcher: Implications for EdD Dissertations

David Berliner is one of the top education researchers and helped define the field of educational psychology. He is a former president of the American Educational Research Association and co-author of books including Educational Psychology, the Handbook of Educational Psychology, and The Manufactured Crisis: Myths, Fraud, and the Attack on America's Public Schools. Dr. Berliner typically extols the traditional perspective of education research. This is why I was surprised to see him describe the role of theory in a very non-traditional fashion in a recent autobiography (in Education Review) summarizing what he has learned in his long and distinguished career. Dr. Berliner concludes that:
"Theory may be overrated. The journals and the scholarly community value “theory.” But I have done a lot of research on teachers and teaching without much theory to guide me. We deal with the practical in education, and the practical is filled with complexity, some of which is hard to fit into psychological or any other social science theory...
A good question is a good question, and should be pursued. Working from a Piagetian or Vygotskian theory is nice, and thinking about the world from a Freirean position, or asking what would Derrida say, is also to be lauded. But in my research career, a good question sensibly answered is worth its weight in gold. So I have come to believe that dust-bowl empiricism is too often dismissed as inadequate, and that theory in education research is too often over emphasized. I am more impressed with the quality of the question asked and the attempt to answer it, and less impressed with the quality of the theory."  

Implications for EdD Dissertations

Dr. Berliner's statements point to the need to rethink the traditional perspective on the role of theory in an applied program such as the EdD. The question is how to do that. It is clearly appropriate for EdD programs to require/expect students to exhibit knowledge of theory and to demonstrate the ability to apply theory to problems in class assignments and exams. However, should students be required to provide a theoretical rationale for the questions they ask in an EdD dissertation, or for the approach chosen to study the problem? Dr. Berliner's perspective, and Chapters 4 and 5 of the above alternative quantitative methodology text for EdD programs, suggest that theory and theoretical justifications should not be required as a basis for EdD dissertation work, though students should have the option of using them. Rather, the focus should be on asking good questions, i.e., ones for which there is clear empirical evidence of importance and/or that have not previously been asked in adequate fashion. Methodology and approach should be demonstrably unique—unless you are trying to replicate or extend the results of a prior study.

The URL for this autobiography is 

Friday, May 6, 2016

Are the Recommendations of the What Works Clearinghouse Useful/Reliable for Practitioners?

In 2002 the US Congress established the What Works Clearinghouse in the US Department of Education's Institute of Education Sciences. The What Works Clearinghouse's role is to set rigorous quantitative standards, based on the best science, that provide guidance for practitioners as to what works by assessing the quality of the research evidence supporting a given intervention. The What Works Clearinghouse then issues a rating on whether the evidence behind an intervention meets its standard of evidence or does not. In this way, the What Works Clearinghouse performs the same function in education that the Food and Drug Administration (FDA) does in medicine.

Can leaders trust the recommendations of the What Works Clearinghouse? Are they valid?  

The above text on quantitative analysis raises questions about the quality of the WWC's reviews (see Chapter 3). Specifically, the text criticizes the WWC for validating the research behind 'Success for All' when there was a great deal of published contrary evidence, and for concluding that 'charter schools' are more effective than 'traditional public schools' despite the tiny Effect Sizes (ESs). Are these problems anomalies, or are there deep-seated problems with the WWC's recommendations?

An amazing recent article suggests that the problems with the WWC's recommendations are deep-seated. Ginsburg and Smith (2016) examined the evidence for all the math programs certified by the What Works Clearinghouse as having evidence of effectiveness provided by the most rigorous "gold standard" research design—Randomized Controlled Trials (RCTs). They reviewed all 18 math programs that had been certified by the WWC, covering 27 approved RCT studies. They found 12 potential threats to the usefulness of these studies, and concluded that "…none of the RCT's provides useful information for consumers wishing to make informed judgments about what mathematics curriculum to purchase."

One of the key problems across the What Works Clearinghouse math studies was that, where Ginsburg and Smith (2016) were able to determine the error effects of a threat, the error generated by even one of those threats was at least as great as the Effect Size favoring the treatment group. In other words, the 'gold standard' of research is not so golden.
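The logic of why this matters can be illustrated with invented numbers (this Python sketch uses hypothetical values, not figures drawn from Ginsburg and Smith):

```python
# Suppose a certified program reports this effect size favoring the treatment:
reported_es = 0.20

# And suppose a single design threat can bias the estimate by at least this much
# (the point above: the error from even one threat was at least as large as the ES):
threat_bias = 0.20

# The true effect could then lie anywhere in this interval:
lo, hi = reported_es - threat_bias, reported_es + threat_bias
print(f"True effect could be anywhere from {lo:.2f} to {hi:.2f}")

# Because the interval includes zero, the study cannot rule out "no effect at all."
print("Zero (no effect) is consistent with the data:", lo <= 0 <= hi)
```

Once a single threat's error is as large as the reported effect, the reported advantage could be entirely an artifact of the design, which is why the authors conclude the studies provide no useful consumer information.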

This analysis has two major implications:
  • Leaders cannot trust the recommendations of the What Works Clearinghouse (WWC). Leaders need to conduct their own due diligence using the techniques in the book—both on research in general and on the recommendations of the WWC.
  • The fact that the WWC requires the most rigorous research methodology and statistical evidence, and there are still a dozen potential "threats" to the results, suggests that the real world of practice is too complex for the traditional experimental approach and its reliance on relative measures of performance. In other words, it is virtually impossible to establish full internal validity in applied experimental research, regardless of how rigorous the research standards are. (Simpler alternatives for assessing the effectiveness of interventions are discussed in Chapter 5 of my methodology book.)

Ginsburg, A., & Smith, M. S. (2016). Do randomized control trials meet the "Gold Standard"? A study of the usefulness of RCTs in the What Works Clearinghouse. The URL to this article is:

Monday, May 2, 2016

Can Theory Emerge From Practice? Does Scientific Discovery Only Occur at a Young Age?

The book Authentic Quantitative Analysis for Leadership Decision-Making starts with discussions about the nature of science and then, in Chapters 1 and 2, uses that as the basis for discussing the relationship between theory and application in science. The discussions about the relationship between theory and practice in the book are unique and counter-intuitive relative to how they are usually framed in EdD programs. The innovative nature of the discussion includes:

  • the importance of personal theories of action along with academic theories, and 
  • the fact that applying theory to practice does not necessarily make the practice more effective than one that is not based on academic theory.

Chapter 4 goes a step further and argues that while theory is universally discussed as a key to improving practice, this ignores the fact that important academic theory can emerge from a successful practice that was based on an alternative mode of scientific discovery, e.g., accident or metaphor.

This is why I was glad to see a recent article about Ivo Babuška, a 90-year-old mathematician. The article noted that while applications such as cryptography have emerged from highly abstract number theory, "Conversely, many elegant and aesthetically pleasing mathematical theories have emerged from the most utilitarian applications." In other words, even abstract mathematical theory can emerge from the experience of successful practice.

The bottom line is that EdD programs should encourage students to explore innovative approaches to improving practice even if there is no theoretical justification.

And, by the way, one other takeaway from this article for us older faculty and students in EdD programs is that the most important work of this mathematician occurred when he was 70. This contradicts the often-cited notion that science and math are a young "man's" game and that folks are over the hill by 30 or 40. In other words, faculty and students over the age of 40, in any field, can still make important discoveries. Science is also finally acknowledging and honoring the important contributions of women. So let's embrace as a positive the fact that EdD programs tend to work with more experienced, older students than PhD programs, and also view as a plus the highly diverse nature of our students.

Saturday, April 30, 2016

Happenings at the 2016 AERA Conference—Part II

Having a chance to meet and talk with EdD faculty from around the country is a wonderful opportunity to listen and learn. Several mentioned that they taught quantitative methods to both PhD and EdD students at the same time. Given that my book, Authentic Quantitative Analysis for Leadership Decision-Making, is geared specifically to EdD students, I suggested that they differentiate the course somewhat. They could use this book for the EdD students, use Chapters 1-4 and the chapter on the literature review with the PhD students, and also have a more traditional text specifically for the PhD students that delves deeper into methodology and internal validity. Some of the reading assignments could then overlap while others are differentiated.

The thing that the folks I met were most impressed with about the methods in the book was the elegant simplicity of being able to teach EdD students to critique the most sophisticated quantitative experimental research by looking for just three numbers in order to determine the practical significance of the findings.

The most negative comment came from one person who, after looking at the table of contents, was insulted that the book covered her entire course in just 18 pages. I noted that the book might give her some ideas of additional things she could incorporate into her course. However, she looked at me as though I were an idiot and walked off in a huff. Fortunately, that was the exception. Everyone else was very open to the notion that there was a need to reform how quantitative methods are taught, and that there was now a text that would provide them with the resource and credibility to do so. Furthermore, everyone understood that once students know how to critique quantitative research, all faculty can integrate more such research into their courses.

A final suggestion is that all faculty should encourage their students who attend AERA or any professional conference to spend some time in the exhibitor area. It is a place where they can see the latest publications from many publishers and meet with editors to discuss publication ideas.