The way that some education studies are presented could give teachers unrealistic expectations about what they might be able to achieve in the classroom, according to new research.
Education studies are important for improving standards of teaching around the world, and many schools adopt interventions they believe will help raise students’ grades.
In the UK, the Education Endowment Foundation (EEF) commissions hundreds of trials each year, which are summarised, published online, and used by around two-thirds of early years, primary, secondary and sixth-form schools in Britain.
However, a new paper published in the journal Educational Researcher shows that these reviews are being presented to teachers in ways which could misrepresent their potential impact on the education of pupils.
Dr Hugo Lortie-Forgues, of Loughborough University, said that although several metrics can be used to communicate the success of a trial, there is no consensus about which is best suited for communication with teachers.
The researchers found that different metrics induced very different perceptions of an intervention’s effectiveness. This situation could lead teachers to overestimate or underestimate the usefulness of the interventions.
The different metrics used to report effects:
- Months of Progress: Additional learning gain reported in a unit of months, based on an estimate of yearly growth
- Percentile Gain: Expected change in the percentile rank of an average student had that student received the intervention
- Cohen’s U3: Percentage of students in the intervention group scoring above the mean of the control group (Cohen, 1988)
- Threshold: Proportion of students reaching a certain threshold (e.g., passing a test)
- Test Score: Impact of the intervention in the outcome’s units
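To make the contrast between these metrics concrete, here is a minimal sketch (not from the paper) of how a single standardized effect size, Cohen’s d, could be translated into three of them. The function name and the assumption that one standard deviation of attainment equals twelve months of progress are purely illustrative; real conversions, such as the EEF’s, rest on empirical estimates of yearly growth:

```python
from scipy.stats import norm

def describe_effect(d, months_per_sd=12):
    """Translate a standardized effect size (Cohen's d) into
    teacher-facing metrics. months_per_sd is an illustrative
    assumption (one SD of attainment ~ one school year of growth);
    real conversions use empirically derived progress estimates."""
    phi = norm.cdf(d)  # P(Z <= d) under a standard normal curve
    return {
        "months_of_progress": d * months_per_sd,  # learning gain in months
        "percentile_gain": 100 * phi - 50,        # shift from the 50th percentile
        "cohens_u3": 100 * phi,                   # % of treated students above control mean
    }

# A small-to-moderate effect, typical of education trials
print(describe_effect(0.2))
# {'months_of_progress': 2.4, 'percentile_gain': 7.9..., 'cohens_u3': 57.9...}
```

Under these illustrative assumptions, the same d = 0.2 effect reads as roughly 2.4 months of progress, an 8-point percentile gain, or 58% of intervention students scoring above the control-group mean: three very different-sounding descriptions of a single result.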
Dr Lortie-Forgues said: “In recent years, there has been a growing effort to produce high-quality evidence in education.
“In education research, an intervention’s impact is typically reported in units of standard deviations, but this measure is hard to interpret, so studies are generally translated into more digestible metrics before being reported to teachers.
“However, very little research has examined how to present this evidence in ways that maximize the ability of teachers to make informed decisions.”
The paper, “How should educational effects be communicated to teachers?”, reports two studies involving 500 teachers, carried out by Dr Lortie-Forgues and Dr Matthew Inglis from Loughborough and Dr Ut Na Sio from the University of Sheffield.
In the first, the researchers found that teachers have strong preferences among effect size metrics, rating them very differently for informativeness, understandability and helpfulness, and that these preferences run counter to current research reporting recommendations.
In the second, they found that different metrics provoked different perceptions of an intervention’s effectiveness.
For example, when an intervention’s impact was described in additional months of progress, as is often done in the UK, teachers perceived the intervention as much more effective than when the same impact was reported as the additional points students receiving the intervention gained on a standardized test.
Dr Lortie-Forgues said: “Together, our findings suggest that the current way intervention effects are communicated could interfere with teachers’ ability to make informed decisions and to develop realistic expectations.
“A possible way to minimize this issue would be to communicate the impact of educational interventions using multiple metrics, as is often done in leading medical journals. This is something that we will explore in future research.”