
I don't understand what point you are making. Doesn't the name "Reasoning language models" claim that they can reason? Why do you want to see it explicitly written down in a paper?


This very paper rests on the assumption that reasoning (to solve puzzles) is at play; it calls those LLMs RLMs.

Imo the paper itself should have touched on the lack of literature discussing what's in the black box that makes them Reasoning LMs. It does mention a tree algorithm supposedly key to reasoning capabilities.

I'm by no means attacking the paper, as its intent is to demonstrate the models' lack of success at solving puzzles that are simple to formulate yet complex to solve.

I was not making a point; I was genuinely asking in case someone knows of papers I could read that claim, with evidence, that these RLMs actually reason, and how.


By renaming this binary to a "Mind reading language model", we can now read your mind and predict your choices just by chatting.

Don't ask how it works, cuz it's called a "Mind reading language model", duh.




