That fuzziness can be quantified. It's called error bars. Whenever physicists perform a measurement, they derive a confidence interval from the instruments they use. They take great care to account for the limits of each individual instrument, perform error propagation, and report the uncertainty of the final result.
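As a toy illustration of what that propagation step looks like (my own sketch, not taken from any article, with made-up numbers): for a derived quantity f(x, y), the standard first-order formula combines the individual instrument uncertainties through the partial derivatives of f.

```python
import math

# First-order (Gaussian) error propagation for f(x, y) = x / y,
# e.g. a measured flux divided by an exposure time. Values are placeholders.
x, sigma_x = 120.0, 4.0   # measured value and its 1-sigma uncertainty
y, sigma_y = 30.0, 0.5

f = x / y
# sigma_f^2 = (df/dx)^2 * sigma_x^2 + (df/dy)^2 * sigma_y^2
df_dx = 1.0 / y
df_dy = -x / y**2
sigma_f = math.sqrt((df_dx * sigma_x)**2 + (df_dy * sigma_y)**2)

print(f"f = {f:.2f} +/- {sigma_f:.2f}")  # the value that gets reported, with its error bar
```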
Consider figure 5 of the following article for example:

https://arxiv.org/abs/1105.3470
The differently shaded ellipses represent different confidence levels. For the largest ellipse, the probability of the true values being outside of it is less than 1%. We call that 3-sigma confidence.
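For reference, here is a small sketch (mine, assuming the usual one-dimensional Gaussian convention; contours of a two-dimensional ellipse are strictly computed a bit differently) of how sigma levels map to the probability of the true value falling outside the interval:

```python
import math

def outside_probability(k_sigma: float) -> float:
    """Two-sided probability that a Gaussian value lies more than
    k_sigma standard deviations away from the mean."""
    return 1.0 - math.erf(k_sigma / math.sqrt(2))

for k in (1, 2, 3):
    print(f"{k}-sigma: {outside_probability(k):.2%} outside")
# 1-sigma: ~31.7% outside, 2-sigma: ~4.6%, 3-sigma: ~0.27% (well under 1%)
```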
> Whenever I read things like "This model can't explain the bullet cluster, or X rotation curve, so it's probably wrong" my internal response is "Your underlying data sources are too fuzzy to make your model the baseline!"
Well, then do some error analysis and report your results. Give us sigmas, percentages, probabilities. Science isn't based on gut feelings, but on cold, hard numbers.
It's not just a question of instrumental error though. There are also assumptions being used in interpreting the data from the instruments, and it's not generally possible to assign them reliable probabilities.
e.g. the first line of the article's abstract quoted above:
"Supernova (SN) cosmology is based on the key assumption that the luminosity standardization process of Type Ia SNe remains invariant with progenitor age."
If the results reported in the article are right, the confidence we should have in this assumption, and therefore in any results relying on it, has just radically changed.
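To make that dependence concrete, here is a hypothetical back-of-the-envelope sketch (my numbers, not the article's): if a conclusion is only trustworthy when the standardization assumption holds, confidence in the conclusion drops roughly in step with confidence in the assumption.

```python
# Hypothetical illustration only; none of these probabilities come from the article.
def result_confidence(p_assumption: float,
                      p_result_given_assumption: float,
                      p_result_given_not: float) -> float:
    """Total probability the result is correct, marginalizing over
    whether the underlying assumption holds."""
    return (p_result_given_assumption * p_assumption
            + p_result_given_not * (1.0 - p_assumption))

# Before: near-certain the luminosity standardization is age-invariant.
print(result_confidence(0.99, 0.95, 0.10))  # ~0.94
# After a study casting serious doubt on the assumption:
print(result_confidence(0.60, 0.95, 0.10))  # ~0.61
```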
My concern is model accuracy holistically: analyzing likelihood-of-being-correct, including all assumptions. I think the post you are responding to is in that context.