Evaluating Research

If we as teachers are to use evidence-based practices, we must develop the skills to evaluate the quality of the evidence. These articles show how to read and analyse research reports from an appropriately critical perspective.


Sorting the wheat from the (research) chaff: a rough guide. (The Snow Report, 2014)

This invaluable 'consumer's guide to research' by Pamela Snow is an excellent starting point for anyone who wishes to take an astute and critical approach to evaluating educational research. Highly recommended.

Go to website → 

 
Levels of Evidence (Ebling Library, University of Wisconsin)

This web page provides a clear hierarchy for levels of evidence in research and an excellent outline on how to evaluate a research paper. The subject context of this page is nursing, but the information is equally relevant to education research. This information should be mandatory in all teacher training programmes.

Go to website → 

 
Visible Learning (Hattie J 2008)

This massive analysis of educational research is an excellent starting point for investigating the effectiveness of different approaches. Hattie himself points out that the findings are not meant to be conclusive, but should serve as indicators of where to look more closely. (NB This link will take you to Amazon).

Go to website → 

 
John Hattie on Visible Learning (researchED Magazine)

This interview, originally published in educ.alla, the employee magazine of Utbildningsförvaltningen (the education authority) in Gothenburg, Sweden, also appeared in the researchED online magazine. Hattie is quizzed on the usefulness of effect sizes, criticisms of his statistical calculations, and the insights research may lend to specific educational debates. Hattie's justification for training teachers in research:

“It is certainly the case that many do not want to believe evidence as their own ‘experiences’ tell them different. Research starts from the premise of attempting to falsify your pet theories and many parents, teachers and politicians work from the premise of attempting to find beliefs and evidence to support their prior beliefs. This confirmation bias means they spend millions of dollars on the wrong issues, and thus do major damage to the learning of our children.” 

Go to website → 

 
It's the Effect Size, Stupid - what effect size is and why it is important (Coe R 2002) University of Durham School of Education from Education-Line

A key tool for evaluating research, especially meta-analyses, is effect size. Here, Robert Coe outlines what it is, how it can be used and how it should not be used. Essential reading for those who are serious about knowing not just what works, but how well an approach may work in different contexts.
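
As a rough illustration (this is the standard form of a standardised mean difference such as Cohen's d, with invented numbers, not a worked example taken from Coe's paper), an effect size is the difference between two group means divided by the pooled standard deviation:

effect size = (mean of intervention group - mean of control group) / standard deviation

So if an intervention group averages 52 on a reading measure, a control group averages 48, and the standard deviation is 10, the effect size is (52 - 48) / 10 = 0.4; that is, the intervention group outperformed the control group by four tenths of a standard deviation.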

Go to website → 

 
Randomised Control Trials and their limitations for use within educational research. (Hassey, N., undated)  

 Nick Hassey raises questions about the use of RCTs in educational research, suggesting that many common assumptions amongst educators about this approach may be incorrect. The ensuing comments on the post provide an informative discussion. 

Go to website →

 
Some Problems With “Action Research” (ifitsgreenitsbiology blog,  January 2015)

This blog post critiques another post which claimed to provide evidence on relative typing speeds on iPads versus traditional computers. The 'evidence' is examined from a number of different perspectives. It is an excellent example of why teachers should be cautious about how they evaluate 'evidence'.

Go to website →

 
Improving Education: a Triumph of Hope Over Experience (Coe R 2013) Durham University Centre for Evaluation and Monitoring

Robert Coe presents a summary of research which takes into account the quantity and quality of the evidence available, as well as comparing the effect of different approaches with their cost of implementation. He argues that schools need to make better use of research and also to implement it carefully and rigorously.

Go to PDF → 

 
Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research To Make Curricular and Instructional Decisions (Stanovich, P J and Stanovich K E 2003)

An excellent overview of the ways in which research may be applied practically in the classroom.

Go to PDF → 

 
Annotation: Does Dyslexia Exist? (Stanovich K E 1994) Journal of Child Psychology and Psychiatry Vol 35: 4 579-595

Stanovich outlines the issues in defining 'dyslexia', and examines the lack of empirical evidence for the long-standing assumption that it is defined by a discrepancy between (low) reading performance and high IQ. This 'folk belief' has become pervasive not only in the media but also in educational research and practice.

Go to PDF → 

 
Teaching Children to Read: the fragile link between science and federal education policy (Camilli G, Vargas S and Yurecko M 2003) National Institute for Early Education Research and Rutgers University retrieved from Education Policy Analysis Archives Vol 11:15

In 2000 the National Reading Panel presented its findings on the Essential Components of Reading Instruction. The authors of this study re-evaluated the NRP’s meta-analysis and reached significantly different conclusions.

Go to website →

 
Literacy as a complex activity: deconstructing the simple view of reading (Stuart M, Stainthorp R and Snowling M 2008) Literacy 42(2)

The authors explain the underlying complexities and interactions implied by the Simple View of Reading, a model which posits that successful reading comprehension arises from the interaction between word recognition and spoken language comprehension. They contend that the Simple View of Reading is well aligned with the findings of over two decades of research.
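
In its usual shorthand (a standard presentation of Gough and Tunmer's original model, not a quotation from this paper), the Simple View is written as R = D × C, where R is reading comprehension, D is decoding (word recognition) and C is linguistic (spoken language) comprehension. Because the two components are multiplied, a severe weakness in either one is enough to undermine reading comprehension, however strong the other may be.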

Go to PDF →

 
What Does Evidence-Based Practice in Education Mean? (Hempenstall, K 2006): LDA National Conference paper

Kerry Hempenstall discusses what should and should not be considered “evidence” when using research to inform practice. 

Go to website →

 
A Brief Critique of Hart and Risley (Nation I S P undated) LALS, Victoria University of Wellington, NZ

This concise but clear critique of Hart and Risley’s important paper raises key questions about methodology and how much weight should be placed on the findings of the study. This is an excellent example of how to ask the right questions when evaluating a research report.

Go to PDF →

 
Whole-Language High Jinks – How to Tell When ‘Scientifically-Based Reading Instruction’ Isn’t (Moats, L January 2007): Thomas B Fordham Institute

Louisa Moats analyses how equivocal language is used to justify the use of programmes and practices which are not supported by scientifically-based research.

Go to PDF →

 
Small Bangs for Big Bucks - the long term efficacy of Reading Recovery (Wheldall, K February 2013): Notes from Harefield

Kevin Wheldall, Emeritus Professor at Macquarie University, questions the conclusions of a report into the long-term efficacy of Reading Recovery. He points out that the report's conclusions are at odds with the details of RR and non-RR students' achievement. Wheldall's article is an excellent example of why 'research' should always be read critically.

Go to website →

 
Reading Into Reading Recovery (Patterson R 2013) The New Zealand Initiative

Rose Patterson summarises the concerns of Chapman and Tunmer in their critique of Reading Recovery in New Zealand, and challenges complacency about the effectiveness of the programme and the national approach to teaching reading in general.

Go to website → 

 
Machinations of What Works Clearing House (Engelmann, S 2008): Zig Site

Zig Engelmann challenges the inconsistencies and apparent contradictions in the way the What Works Clearinghouse selects and evaluates research as evidence for effective interventions. Not all approaches are treated equally.

Go to PDF →

 
Examining the Inaccuracies and Mystifying Policies and Standards of the What Works Clearinghouse: Findings from a Freedom of Information Act Request (Wood T W 2014) National Institute for Direct Instruction website

Despite its claims to be a reliable and trusted guide to what works in education, the What Works Clearinghouse is shown in this report to have serious weaknesses in the accuracy of its reports – particularly with regard to evaluating the fidelity of implementation of interventions. The high-stakes nature of the WWC’s role makes it imperative that these problems are addressed.

Go to website →

 
Follow-Up of Follow Through: the Later Effects of the Direct Instruction Model on Children in Fifth and Sixth Grades (Becker W C and Gersten R 2001) Journal of Direct Instruction 1: 1

Three years after Project Follow Through, Wes Becker followed up a large sample of students to evaluate the longer term impact of Direct Instruction. The strongest findings showed lasting gains in word decoding, spelling and maths computation. Becker discusses a range of research problems that had to be overcome in order to conduct this study.

Go to website →

 
Teaching Reading and Language to the Disadvantaged - What We Have Learned from Research (Becker W C 2001) Journal of Direct Instruction 1: 1

Wes Becker discusses the history of educational research initiatives in the US, and suggests reasons for the apparent failure of many of these. He also draws conclusions as to what approaches would make the most significant difference to economically disadvantaged children.

Go to website →

 
Critique of Lowercased d i (direct instruction)

Zig Engelmann analyses the origins of the terms 'direct instruction' and 'Direct Instruction' in modern education and considers the teacher training problems inherent in using (lowercase) direct instruction.

Go to website →

 
Every Child a Reader: An example of how top-down education reforms make matters worse (Policy Exchange 2009) 

This article examines the ways in which political agendas carry more weight than does evidence of success when policy decisions are made regarding particular interventions or teaching methods. Focusing on the implementation of the Every Child a Reader strategy, the central component of which is Reading Recovery, the report is critical of the limited evidence of effectiveness, the high costs and the inflexibility of the overall strategy.

Go to PDF →

 
Meta-Analytic Validation of the Dunn and Dunn Model of Learning-Style Preferences: A Critique of What Was Dunn (Kavale K A, Hirshoren A and Forness S R 1998) Learning Disabilities Research and Practice 13(2), 75-80 from danielwillingham.com website

This excellent article demonstrates what happens when speculative claims are tested scientifically. It summarises an ongoing debate on ‘learning styles’ and the various issues raised, objected to and refuted. It is also entertainingly written.

Go to PDF →

 
Constructivism in Education: Sophistry for a New Age (Kozloff M A 1998) University of North Carolina, Wilmington website

Martin Kozloff critically analyses the philosophical and rhetorical problems associated with much ‘constructivist’ pedagogy. He argues that the prevalence of muddled thinking in education is responsible for widespread educational mediocrity.

Go to PDF →

 
Is Brain Gym an Effective Intervention? (Spaulding L S, Mostert M P and Beam A P 2010)

An analysis of the extant research material on Brain Gym. Neither face validity nor the effectiveness data provide any support for the claims made for this programme.

Go to website →
