Sunday, April 17, 2016

Happenings at the 2016 AERA Conference

I just returned from the 2016 AERA (American Educational Research Association) conference in Washington, DC. I set up a booth in the exhibitor area to highlight the book (Authentic Quantitative Analysis for Leadership Decision-Making). In a bit of irony, I was across from the Harvard Education Press booth, which probably had a hundred books on display; my booth had only the one. At the same time, mine was the only booth dedicated to the EdD, and I was pleasantly surprised at the large number of people who stopped by.

I must have had conversations with individuals from 60-70 EdD programs around the country. These conversations confirmed that many are concerned about how quantitative research is taught. There is a sense that something is wrong, as students are increasingly turning to qualitative research. There is nothing wrong with qualitative research when it is chosen for the right reasons, but too often students choose it because they feel quantitative methods are too difficult to master, or because they do not see the relevance of traditional, complex quantitative methods to their practice.

There was a tremendous response to the ideas in the book, and many were drawn to the idea of moving quantitative methods from a course on statistics to one that focuses on leadership decision-making. It is also becoming clearer that the forms of statistical analysis used in PhD programs to test theory and those used to inform leadership decision-making are different. It is not that the statistics themselves differ, but that the degree of methodological control and the criteria for interpreting results differ. The big problem for practice is that the forms of statistical analysis typically found in published quantitative research tend to overestimate the importance of the findings for improving practice in the real world. In other words, the methods used in published research on the effectiveness of practices are overly complex and unintelligible to practitioners, and in the end the results are misleading.

When I talk to professors who specialize in policy and practice, but who are not methodologists, about these problems, the typical comment is that they do not get involved with, or do not understand, quantitative research. The result is that we as a profession have abdicated responsibility for making important decisions about what is effective and left that to statisticians. The statisticians and methodologists have developed powerful techniques that enable them to declare small differences to be of practical importance. But the reality is that these differences often have no real-world importance (see the earlier post about the problems that small differences are causing in psychology).
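To make this concrete, here is a minimal sketch (my own illustration, not an example from any particular study) of how a very large sample lets a trivial difference clear the bar of statistical significance. It assumes Python with NumPy and SciPy; the simulated effect of 0.03 standard deviations is hypothetical.

```python
# Illustration: with enough students, a trivial difference becomes
# "statistically significant" even though it barely matters in practice.
# The group names and the 0.03 SD effect are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n = 50_000  # a very large sample per group

# Two groups whose true means differ by only 0.03 standard deviations.
control = rng.normal(loc=0.00, scale=1.0, size=n)
treated = rng.normal(loc=0.03, scale=1.0, size=n)

t_stat, p_value = stats.ttest_ind(treated, control)

# Cohen's d: the raw mean difference in standard-deviation units.
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

print(f"p-value   = {p_value:.6f}")   # comes out well below 0.05
print(f"Cohen's d = {cohens_d:.3f}")  # tiny effect, despite "significance"
```

With 50,000 students per group, the p-value lands far below 0.05, yet the effect size stays around 0.03 standard deviations, a difference few school leaders would ever notice in their buildings.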


That is why the focus of the book is on much simpler and more accurate ways to determine the practical importance of quantitative research evidence. This matters not only for making better decisions about which practices are likely to be effective in your settings, but also for our profession to retake responsibility for interpreting evidence about effective practices. We are the ones who best understand the dynamics within schools and the needs and abilities of students and teachers; this book is a tool for helping us stop being cowed by quantitative evidence and instead critically embrace the valuable information contained within quantitative data.
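As one illustration of what a simpler check can look like (this is my sketch of a common approach, not necessarily the book's specific method), an effect size can be translated into the percentile gain an average student would see, a quantity a leader can weigh directly against cost and effort:

```python
# A simple, practitioner-friendly translation of an effect size into an
# expected percentile gain, assuming roughly normal outcomes. This is an
# illustrative sketch, not a method taken from the book.
from statistics import NormalDist

def percentile_gain(cohens_d: float) -> float:
    """Percentile rank an average control-group student would reach
    if given the treatment."""
    return NormalDist().cdf(cohens_d) * 100

# A "significant" d of 0.03 moves the average student from the 50th
# to about the 51st percentile -- hardly a basis for changing practice.
print(f"d = 0.03 -> {percentile_gain(0.03):.1f}th percentile")
print(f"d = 0.50 -> {percentile_gain(0.50):.1f}th percentile")
```

Framing results this way gives practitioners a direct feel for whether a reported effect is worth acting on.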
