
Genuine question: why have these models all been trained to sound so confident? Would it not have been possible to reward models for announcing their own ignorance? Or does even that question betray an "intelligence" view of these models that isn't accurate?


I think you are confusing ChatGPT with AI. ChatGPT is a statistical fiction generator. It sounds confident because it is writing fiction, for precisely the same reason that the billions of ignorant people worldwide who post "facts" online sound confident: they are incapable of recognizing their own ignorance. They are just systems that take inputs and generate outputs.


I believe the GPT-4 technical report mentions that the RLHF stage degraded the model's ability to gauge its own confidence; the pre-RLHF base model was reportedly much better calibrated.
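
A rough sketch of what "calibration" means there: the model's stated confidence should match how often it is actually right. Assuming you already have per-answer confidences and correctness flags from some eval (the data below is made up purely for illustration), you could measure it like this:

    # Expected calibration error: bucket answers by stated confidence,
    # then compare each bucket's average confidence to its actual accuracy.
    def expected_calibration_error(confidences, correct, n_bins=10):
        bins = [[] for _ in range(n_bins)]
        for conf, ok in zip(confidences, correct):
            bins[min(int(conf * n_bins), n_bins - 1)].append((conf, ok))
        ece, total = 0.0, len(confidences)
        for bucket in bins:
            if not bucket:
                continue
            avg_conf = sum(c for c, _ in bucket) / len(bucket)
            accuracy = sum(ok for _, ok in bucket) / len(bucket)
            ece += (len(bucket) / total) * abs(avg_conf - accuracy)
        return ece

    # A well-calibrated model that says "90%" should be right ~90% of the time.
    print(expected_calibration_error([0.9, 0.9, 0.6, 0.3], [True, True, True, False]))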


The problem is that the model doesn't know whether anything it's saying is true or false, so trying to make it "fact-check" itself just means it will constantly interrupt itself regardless of the accuracy of the output.
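
The only built-in "confidence" signal it has is next-token probability, which measures how plausible a continuation is, not whether it is true. A minimal sketch of pulling those per-token log-probabilities out of an open model with Hugging Face transformers (the model choice and example sentence are just illustrative):

    # Per-token log-probabilities from a causal LM: they reflect how likely
    # each token is given the prefix, not the factual accuracy of the claim.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # any causal LM works; gpt2 is just small and public
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    text = "The capital of Australia is Sydney."  # fluent, but false
    ids = tok(text, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)

    # log-prob assigned to each actual token, given the tokens before it
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
    token_lp = logprobs.gather(2, ids[:, 1:].unsqueeze(-1)).squeeze(-1)

    for token, lp in zip(tok.convert_ids_to_tokens(ids[0, 1:].tolist()), token_lp[0]):
        print(f"{token!r}: {lp.item():.2f}")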




