
It's just a perspective, but Bayesian inference makes the prior explicit, whereas it is implicit with frequentist inference. You can't not have a prior.


That's fair, it forces you to document your assumptions. But at some point as you drill down into assumptions, the trail will grow cold. The article even talks about inserting priors that are either subjective or some default function provided by the software.

A possible third approach is to create a hypothetical model and feed random data through it, to get an idea of the spread of the outcomes. Simulation doesn't stumble over conditional terms, and if you're unclear about an assumption, the model simply won't run. The computer doesn't know whether it's a frequentist or a Bayesian, or something different from both.
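
A minimal sketch of that approach (the model, input distribution, and noise level here are all hypothetical, purely for illustration): run random inputs through a deterministic model many times and look at the spread of the outputs directly.

    import numpy as np

    rng = np.random.default_rng(42)

    def model(x):
        # hypothetical deterministic model: a sensor with a mild nonlinearity
        return 2.0 * x + 0.1 * x**2

    n_trials = 100_000
    x = rng.normal(loc=5.0, scale=0.5, size=n_trials)   # assumed input distribution
    noise = rng.normal(scale=0.2, size=n_trials)        # assumed measurement noise
    y = model(x) + noise

    # the spread of the outcomes, with no test formula in sight
    print(f"mean={y.mean():.2f}  sd={y.std():.2f}")
    print("95% interval:", np.percentile(y, [2.5, 97.5]))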

I'm not a statistician. Whenever I need to do something with statistics, I always test my computation with random data.

In fact I wonder whether, had simulation been possible since the birth of statistics, we would even have bothered with things like elaborate formulas for statistical tests.
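
That already happens in places: a permutation test replaces a t-test's distributional formula with brute-force resampling. A sketch (the two samples here are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.normal(0.0, 1.0, 50)   # hypothetical sample A
    b = rng.normal(0.3, 1.0, 50)   # hypothetical sample B

    observed = b.mean() - a.mean()
    pooled = np.concatenate([a, b])

    # shuffle group labels and count how often the difference is as extreme
    n_perm = 10_000
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[50:].mean() - perm[:50].mean()
        if abs(diff) >= abs(observed):
            count += 1

    print(f"permutation p-value: {count / n_perm:.4f}")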


Bayesian stats actually let you integrate stochasticity into deterministic models fairly easily, in a way you can't really do with frequentist stats; this is what probabilistic programming does, and Bayesian methods are common in, for example, geophysical inverse modelling. Exact Bayesian inference is intractable in the general case, but approximate inference often works well.
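
A toy illustration of approximate inference, assuming a normal likelihood with known noise and a normal prior on the mean (all numbers made up): a bare-bones Metropolis sampler, the kind of machinery probabilistic programming systems automate.

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(3.0, 1.0, size=30)  # hypothetical observations, noise sd = 1 (known)

    def log_post(mu):
        log_prior = -0.5 * (mu / 10.0) ** 2         # prior: mu ~ Normal(0, 10)
        log_lik = -0.5 * np.sum((data - mu) ** 2)   # likelihood: data ~ Normal(mu, 1)
        return log_prior + log_lik

    samples, mu = [], 0.0
    for _ in range(20_000):
        prop = mu + rng.normal(scale=0.5)           # random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
            mu = prop                               # accept
        samples.append(mu)

    post = np.array(samples[5_000:])                # drop burn-in
    print(f"posterior mean={post.mean():.2f}, sd={post.std():.2f}")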

Subjective priors are one of the main advantages of Bayesian stats. The regularization used in ML corresponds to a subjective prior: L2 regularization finds the MAP estimate under a zero-mean normal prior on the weights, favoring parsimonious solutions.
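
To make the correspondence concrete, a sketch on synthetic data with a made-up lambda: minimizing ||y - Xw||^2 + lambda * ||w||^2 (ridge) yields the same normal equations as maximizing the log-posterior with y ~ N(Xw, sigma^2) and w ~ N(0, (sigma^2 / lambda) * I).

    import numpy as np

    rng = np.random.default_rng(7)
    X = rng.normal(size=(100, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true + rng.normal(scale=0.5, size=100)

    lam = 1.0  # regularization strength == ratio of noise variance to prior variance

    # ridge / L2-regularized least squares, closed form:
    # this is also the MAP estimate under the zero-mean normal prior above
    w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
    print("ridge = MAP:", w_ridge)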



