
There's a difference between being able to do logical reasoning and being able to do everything a competent white-collar worker can do. For one, there are token and memory limits, which cap the scale of the problems current iterations of GPT can handle (and that limitation has nothing to do with logical reasoning ability). There's also (for GPT) no way to fine-tune or train the model to work better in a specific domain unless you're in bed with OpenAI/Microsoft.
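
To make the token-limit point concrete, here's a minimal sketch, assuming OpenAI's tiktoken library and gpt-4's original 8K context window (the file name is hypothetical):

    import tiktoken

    enc = tiktoken.encoding_for_model("gpt-4")
    CONTEXT_LIMIT = 8192  # gpt-4's original context window, in tokens

    document = open("contract.txt").read()  # hypothetical long input
    tokens = enc.encode(document)
    if len(tokens) > CONTEXT_LIMIT:
        # The model never sees anything past the limit; the input has
        # to be chunked or summarized first, regardless of how well
        # the model reasons.
        document = enc.decode(tokens[:CONTEXT_LIMIT])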

I think as a society we don't really have precise words for describing the different levels of intelligence, except in an "I know it when I see it" way. I don't think I'm LARPing in any way; I'm probably even less excited about it than you are, given that you seem to be using it more often than I am. I'm just saying I think GPT does exhibit some logical reasoning abilities rather than merely remembering statistical patterns.



> I'm just saying I think GPT does exhibit some logical reasoning abilities rather than merely remembering statistical patterns.

I agree with most of your reply, except this bit.

I mean, generating a response through statistical language patterns is a sort of reasoning, and ChatGPT has been accurate enough to replace internet search for me quite often. But it also generates bullshit that an untrained eye would miss (because the bullshit is statistically plausible).

When it gets things wrong, the results can be comically bad. I had one case where it looped through variations of the same wrong response, precisely because it is unable to do any kind of logical reasoning over faulty or inaccurate data.
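
For what it's worth, a hypothetical way to probe this: ask the model the same logical question phrased two different ways and compare the answers. This sketch assumes the official openai Python client; the model name and prompts are made up:

    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return resp.choices[0].message.content

    a = ask("If all widgets are gadgets and no gadgets are free, "
            "can a widget be free?")
    b = ask("No gadgets are free, and every widget is a gadget. "
            "Is a free widget possible?")
    # Logically equivalent prompts; divergent answers point to
    # pattern-matching rather than reasoning.
    print(a)
    print(b)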



