The IPPR's "Making the Difference" report was full of incorrect statistics. Part 3
In 2017, the IPPR published false and misleading information about exclusions. It has distorted the debate ever since.
The story so far
Over the past few months, I’ve noticed that many dubious statistics about exclusions from schools in England first appeared in a single report. This report, Making the Difference, was published by the IPPR1 think tank in 2017.
In Part 1, I looked at the claims made in the report’s “60-second summary” about the risk factors for exclusion and the impact of exclusions.
In Part 2, I looked at some other dubious claims in that summary.
In this post, I will discuss why these terrible statistics matter. I will also revisit two of the daftest statistics from the previous posts and show how frequently they have been repeated.
Do incorrect statistics matter?
It’s worth explaining why I think the frequent use of inaccurate statistics in the public debate about exclusions is harmful. It’s not that I disagree with the idea that exclusions are correlated with poor outcomes. I don’t dispute the fact that excluded pupils face more difficulties than the average pupil. Nor should there be any doubt that those pupils who go on to be excluded are more likely to come from disadvantaged homes, to have done badly in their education, and generally to have had a harder life than the average pupil. However, a sensible argument about these issues needs to distinguish correlation from causation. I don’t think that the poor outcomes for excluded pupils are a result of their exclusion.
It could be argued that, if my main concern is about the interpretation of exclusion statistics, none of my claims about accuracy matter. Even if the statistics showing correlations between exclusions and poor outcomes are inaccurate, they often point in the right direction. Perhaps I am just being picky.
There are two reasons I think this kind of inaccurate statistic matters:
The persistence of incorrect “zombie” statistics that might become more misleading over time.
Sloppiness with statistics rarely ends with just incorrect numbers; it also leads to incorrect interpretations of data.
Zombie statistics do not die
Statistics used as propaganda can persist indefinitely; they are just repeated again and again. If we look at two of the most absurd claims in the IPPR’s report, we can see how far they have spread, and how widely they have been accepted. The two silliest claims made by the IPPR are probably these:
“Excluded children are… seven times more likely to have a special educational need” This is absurd to anyone familiar with England’s SEND2 system, as more than one in seven pupils have a special educational need.3 (The quick arithmetic after this list spells out why.)
“Every cohort of permanently excluded pupils will go on to cost the state an extra £2.1 billion in education, health, benefits and criminal justice costs.” This is ridiculous because there is no way to show that any of these costs, other than what is spent on education, are “extra” and would not have been incurred even without those pupils being excluded.
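To spell out the arithmetic behind the first claim: more than one in seven of all pupils (roughly 14 per cent or more) have a special educational need, and seven times 14 per cent is about 98 per cent, so seven times any rate above one in seven exceeds 100 per cent. The claim would therefore require more than every excluded pupil to have a special educational need, which is impossible.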
Who was sloppy enough to repeat the claim that excluded pupils were seven times more likely to have a special educational need?
The following are among those who repeated the impossible claim that excluded pupils were seven times more likely to have a special educational need:
Dr Juste Abramovaite, Institute for Global Innovation, University of Birmingham (evidence to parliament)
Children and Young People’s Mental Health Coalition (evidence to parliament)
National Association of Headteachers (NAHT) (evidence to parliament)
Dr Rosie Ridgeway, Durham University (conference paper)
Local Child Safeguarding Practice Review (Merton Safeguarding Children Partnership)
This is a pretty impressive circulation for a statistic that anyone familiar with the SEND system (and able to multiply by seven) would know to be false immediately.
Even if we could reform the SEND system so that SEND was no longer a risk factor for exclusion, we might expect the claim that “Excluded children are … seven times more likely to have a special educational need” to persist. Since it was never actually true, there is no reason to think that activists would stop repeating it.
Who was gullible enough to repeat the claim that exclusions cost the state £2.1 billion?
The claim about the cost of exclusions was also extremely suspicious. So who repeated it without any apparent scepticism?4
Lord Woolley of Woodford (in the House of Lords)
Rob Halfon MP (then Chair of the Commons Education Committee) in the Telegraph
Thesis, Tavistock and Portman NHS Trust and University of Essex
This is even more impressive circulation than for the previous statistic. There seems to be an information ecosystem involving the media (national and specialist), local government, education businesses, activists, academics and education charities, in which false statistics are constantly recycled. I can only assume that this is because no one is interested in the truth about certain issues. This cannot be a healthy climate for good policymaking, or for informed debate.
Bad statistics and bad interpretation of statistics
It seems likely that people who get the statistics wrong are also more likely to interpret statistics incorrectly, just as those of us who look at statistics carefully, and in detail, are more likely to interpret them correctly. Getting the statistics wrong as a matter of fact and interpreting statistics incorrectly are two separate issues, but they often seem to go hand in hand. If we listen to people who claim that “Excluded children are … seven times more likely to have a special educational need”, we are listening to people who do not understand even the most basic facts about the SEND system. If we listen to people who claim that exclusions cost billions, we are listening to people who do not understand correlation and causation. If we listen to people who don’t care about what is true, then we are listening to people whose conclusions are not based on the facts. I am not suggesting that we can turn education into a technocratic domain where only statisticians have authority. However, we should not allow the terms of the debate to be set by people who know nothing about, or don’t care about, the facts. One of the most disturbing aspects of my lists above is that almost all the theses I linked to were for doctorates in educational psychology. How many of the next generation of educational psychologists working with schools in England will be entirely lacking in professional scepticism, and heavily indoctrinated to think that exclusions are evil?
We must also be cautious of those who originated these dubious statistics. I believe they have little to contribute to understanding the issue of exclusions. Because no one challenges false statistics, their authority remains undiminished. The IPPR is still widely respected as a think tank. The lead author of the IPPR report, Kiran Gill, is particularly influential. She is chief executive of The Difference, a charity with a record of being very wrong about exclusions. She was a member of the reference group for the Timpson Review of School Exclusions. It was recently announced that she will be among those advising Ofsted5 on inclusion. Unfortunately, it’s possible to rise to prominence as an education expert by producing worthless research that seriously damages public understanding of an important issue.
Institute for Public Policy Research
Special Educational Needs and Disabilities.
Because permanent exclusions are rare, the level of SEND among pupils who haven’t been excluded is roughly the same as for the entire pupil population.
This list includes those who quoted the £370,000 per pupil figure, as well as those who quoted the £2.1 billion total.
England’s school inspectorate.