> The conclusion from the analysis of the accident data is that there is no evidence for the hypothesis that railway safety, as measured by accidents, has become worse since privatisation.
This article was written in 2007, just a few years after Network Rail was re-nationalised in 2002.
Since 2002, there has only been a single passenger fatality attributable to poor maintenance: the Grayrigg derailment in 2007.
This followed a cluster of high-profile crashes during the Railtrack (privatisation) era of 1997-2002 that killed dozens and were directly or partly attributable to poor maintenance.
I’d argue that the longer Network Rail’s good safety record continues, the more we can disregard that article.
Read the bottom of page 17 and page 18 of the article I linked to.
Given where it was published (a magazine backed by statisticians' associations in three countries), it would have been very damaging to the reputation of the author (a professor at Imperial College) to have been sloppy about conclusions.
This wouldn't be enough if there were a metric that showed worse safety:
> The conclusion from the analysis of the accident data is that there is no evidence for the hypothesis that railway safety has become worse since privatisation.
The study at hand seems to have considered only fatal accidents, probably for good statistical reasons. But the most obvious next "what" to my mind would be accidents that did not result in fatalities.
Followed closely by safety-involved “incidents” that were reportable somewhere (internally or to a regulator), but that luckily didn’t result in an accident that time.
One imagines these must get really hard to measure reliably the further they get from concrete death records.
I don’t mean it as a criticism! I think that their choice to study a dataset of accidents resulting in fatalities was reasonable, and that it was admirable to reiterate that qualifier in the conclusion.
I’m also sensitive to the parent commenter’s point that the choice of indicator was kind of narrow. But it makes sense to me why academics might choose a narrow but highly reliable dataset, and try to be very transparent about that: safety is certainly much broader than just fatality-accidents, but fatality-accidents are awfully hard to fudge long-term records for.
> I’m also sensitive to the parent commenter’s point that the choice of indicator was kind of narrow.
But you're not. We can separate the parent comment into two claims:
(1) The indicator was accidents;
(2) The indicator is too narrow.
You've supported (2) by pointing out that the parent commenter was completely mistaken about what the indicator actually was. But everything they said was wrong! They were wrong about what the indicator was, and they were wrong that the indicator they called excessively narrow was excessively narrow. They weren't saying that the paper judged by a criterion that was too narrow independently of what that criterion was. (1) and (2) are both claims you can mine out of the parent comment, but they're not claims that were made separately in the parent comment.
"Accidents" covers everything you might want to know about. It's crazy to say that looking at accident rates is "suspiciously specific". It's crazy because it's as nonspecific as you can get.
I don't, however, remember the conclusion!
I've tracked down the article to https://academic.oup.com/jrssig/article-abstract/4/1/15/7029... but I don't have access.