
Foundation models can be seen as approximate amortized posterior inference machines, where the posterior is conditioned on the pre-training data. However, the uncertainty is usually ignored, and there may be ways to improve the state of the art if we were better Bayesians.
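
A minimal sketch of that framing, with notation that is mine rather than the comment's: write D for the pre-training data, x for a prompt, and y for a response. The full Bayesian posterior predictive marginalizes over the model parameters \theta, while a single trained network amortizes it with a point estimate \hat{\theta}:

    p(y \mid x, \mathcal{D})
      = \int p(y \mid x, \theta)\, p(\theta \mid \mathcal{D})\, d\theta
      \approx p_{\hat{\theta}}(y \mid x),
    \qquad \hat{\theta} \approx \arg\max_{\theta} p(\theta \mid \mathcal{D})

Collapsing p(\theta \mid \mathcal{D}) to the single point \hat{\theta} is exactly where the parameter uncertainty gets dropped.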

