Hacker News

Did you even read the article you posted? It supports my statement.

CoT produces the linguistic scaffolding for reasoning, but doesn't actually deliver much of an accuracy gain.

e.g. https://developer.nvidia.com/blog/maximize-robotics-performa...



The article I posted says that chain of thought is enough to make something a reasoning model. It's listed as one of the types.

It says that reasoning models "now aim for" more, but that doesn't disqualify the older more basic techniques.

I'm not sure what I'm supposed to take away from your link, since it's so focused on vision. Can you link me something about what you need to make a bare-minimum reasoning LLM? Text only.



