I don’t mean it as a criticism! I think their choice to study a dataset of accidents resulting in fatalities was reasonable, and that it was admirable to reiterate that qualifier in the conclusion.
I’m also sensitive to the parent commenter’s point that the choice of indicator was kind of narrow. But it makes sense to me why academics might choose a narrow but highly reliable dataset, and try to be very transparent about that: safety is certainly much broader than just fatality-accidents, but long-term records of fatality-accidents are awfully hard to fudge.
> I’m also sensitive to the parent commenter’s point that the choice of indicator was kind of narrow.
But you're not. We can separate the parent comment into two claims:
(1) The indicator was accidents;
(2) The indicator is too narrow.
You've supported (2) while pointing out that the parent commenter's idea of what the indicator was was completely incorrect. But everything they said was wrong! They were wrong about what the indicator was, and they were wrong that the indicator they called excessively narrow was in fact excessively narrow. They weren't claiming that the paper judged by too narrow a criterion independently of what that criterion was. (1) and (2) are both claims you can mine out of the parent comment, but they weren't made as separate claims in the parent comment.
"Accidents" covers everything you might want to know about. It's crazy to say that looking at accident rates is "suspiciously specific". It's crazy because it's as nonspecific as you can get.
This makes no sense as a criticism of "as measured by accidents". The fix for measuring by accidents is to measure by accidents?