
a) Still true: vanilla LLMs can’t do math; they pattern-match unless you bolt on tools.

b) Still true: next-token prediction isn’t planning.

c) Still true: error accumulation is mitigated, not eliminated. Long-context quality still relies on retrieval, checks, and verifiers.

Yann’s claims were about LLMs as LLMs. With tooling, you can work around limits, but the core point stands.



LeCun's argument was that a single erroneous token would derail the rest of the response.

This is, obviously, false: a reasoning model (or a non-reasoning one with a better prompt) can recognize an error and choose a different path; the error will not end up as part of the answer.

You're talking about a different problem: context rot. It's possible that an error would make performance worse. So what?

People can also get tired when solving a complex problem, and they use various mitigations: e.g. it might help to start from a clean sheet. These mitigations might also apply to LLMs: e.g. you can do MCTS (tree-of-thought) or just edit the reasoning trace, replacing the faulty part.
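For the record, here's a minimal sketch of what a tree-of-thought style search over reasoning traces looks like. propose_steps and score are hypothetical stubs standing in for an LLM call and a verifier; they're assumptions for illustration, not any real framework's API:

    # Minimal sketch of a tree-of-thought style search over reasoning traces.
    # propose_steps and score are hypothetical stubs standing in for an LLM
    # call and a verifier - assumptions for illustration, not a real API.
    def propose_steps(trace):
        """Return a few candidate next reasoning steps for a partial trace (stub)."""
        return [trace + [f"step {len(trace)}.{i}"] for i in range(3)]

    def score(trace):
        """Verifier score for a partial trace; higher is better (stub heuristic)."""
        return len(trace)

    def tree_of_thought(root, max_depth=4, beam=2):
        frontier = [root]
        for _ in range(max_depth):
            candidates = [child for trace in frontier for child in propose_steps(trace)]
            # Keep only the best-scoring branches; a faulty step gets pruned
            # here instead of being forced into the rest of the answer.
            frontier = sorted(candidates, key=score, reverse=True)[:beam]
        return max(frontier, key=score)

    print(tree_of_thought(["problem statement"]))

The point is just that a bad continuation gets dropped by the search, not baked into the final output.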

"LLMs are not absolutely perfect and require some algorithms on top thus we need a completely different approach" is a very weird way to make a conclusion.


My man, math is pattern matching, not magic. So is logic. And computation.

Please learn the basics before you discuss what LLMs can and can't do.


I'm no expert on math, but "math is pattern matching" really sounds wrong.

Maybe programming is mostly pattern matching, but modern math is built on theory and proofs, right?


Nah, it's all pattern matching. This is how automated theorem provers like Isabelle are built: applying operations to lemmas/expressions to reach proofs.
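To make that concrete, here's a toy rewrite-rule applier in Python. This is a sketch of term rewriting in general, not Isabelle's actual machinery; "applying an operation" is literally matching a pattern and substituting:

    # Toy term rewriting: apply the rule x + 0 -> x by pattern matching.
    # A sketch of the general idea, not any real prover's internals.
    def match(pattern, expr, bindings=None):
        """Try to match a pattern (variables look like '?x') against an expression."""
        bindings = dict(bindings or {})
        if isinstance(pattern, str) and pattern.startswith("?"):
            if pattern in bindings:
                return bindings if bindings[pattern] == expr else None
            bindings[pattern] = expr
            return bindings
        if isinstance(pattern, tuple) and isinstance(expr, tuple) and len(pattern) == len(expr):
            for p, e in zip(pattern, expr):
                bindings = match(p, e, bindings)
                if bindings is None:
                    return None
            return bindings
        return bindings if pattern == expr else None

    def substitute(template, bindings):
        """Fill matched variables back into the rule's right-hand side."""
        if isinstance(template, str):
            return bindings.get(template, template)
        return tuple(substitute(t, bindings) for t in template)

    rule = (("+", "?x", "0"), "?x")          # rewrite rule: x + 0 -> x
    expr = ("+", ("*", "a", "b"), "0")
    b = match(rule[0], expr)
    print(substitute(rule[1], b) if b is not None else expr)   # ('*', 'a', 'b')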


I'm sure if you pick a sufficiently broad definition of pattern matching, your argument is true by definition!

Unfortunately that has nothing to do with the topic of discussion, which is the capabilities of LLMs, which may require a narrower definition of pattern matching.


Automated theorem provers are also built around backtracking, which is absent in LLMs.
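For illustration, a tiny Prolog-style proof search with backtracking; the rule base is made up just to show the try/fail/try-next-rule loop:

    # Toy backtracking proof search: try each rule for a goal, and if its
    # subgoals fail, give up on that rule and try the next one.
    # The rule base is a made-up example, not any real prover's.
    RULES = {
        "goal":    [["lemma_a", "lemma_b"], ["lemma_c"]],  # two candidate rules
        "lemma_a": [[]],                                   # axiom
        "lemma_c": [[]],                                   # axiom
    }

    def prove(goal):
        for body in RULES.get(goal, []):          # try each rule in turn
            if all(prove(sub) for sub in body):
                return True                       # this branch succeeded
            # otherwise: backtrack and try the next rule for this goal
        return False

    print(prove("goal"))  # True, but only after abandoning the first rule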


When an LLM does it, it's pattern matching.

RL training amounts to pattern matching.

How does an LLM decode Base64? Decode algorithm? No - predictive pattern matching.
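(For contrast, here's a quick sketch of what an explicit decode algorithm looks like: a table lookup plus bit math, nothing predictive about it.)

    # Explicit Base64 decoding: table lookup + bit math,
    # as opposed to predicting likely output text token by token.
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
    INDEX = {c: i for i, c in enumerate(ALPHABET)}

    def b64decode(s):
        s = s.rstrip("=")                                   # drop padding
        bits = "".join(format(INDEX[c], "06b") for c in s)  # 6 bits per char
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))

    print(b64decode("aGVsbG8="))  # b'hello'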

An LLM isn't predicting what a person thinks - it's predicting what a person does.


a) No, Gemini 2.5 was shown to "win" gold w/o tools: https://arxiv.org/html/2507.15855v1

b) Reductionism isn't worth our time. Planning works in the real world, today (try any agentic tool like cc/codex/whatever). And if you're set on the purist view, there's mounting evidence from Anthropic that there is planning in the core of an LLM.

c) so ... not true? Long context works today.

This is simply moving goalposts and nothing more. X can't do Y -> well, here they are doing Y -> well, not like that.


a) That "no-tools" win depends on prompt orchestration, which can still be categorized as tooling.

b) Next-token training doesn’t magically grant inner long-horizon planners.

c) Long context ≠ robust at any length. Degradation with scale remains.

Not moving goalposts, just keeping terms precise.


My man, you're literally moving all the goalposts as we speak.

It's not just "long context" - you demand "infinite context" and "any length" now. Even humans don't have that. "No tools" is no longer enough - what, do you demand "no prompts" now too? Having LLMs decompose tasks and prompt each other the way humans do is suddenly a no-no?


I’m not demanding anything; I’m pointing out that performance tends to degrade as context scales, which follows from current LLM architectures as autoregressive models.

In that sense, Yann was right.


Not sure if you're just someone who doesn't want to ever lose an argument, or you're actually coping this hard.


I just see a lot of people who’ve put money in the LLM basket and get scared by any reasonable comment about why LLMs aren’t almighty AGIs and may never be. Or maybe they are just dumb, idk.


Even the bold take of "LLMs are literally AGI right now" is less of a detour from reality than "LLMs are NEVER going to hit AGI".

We've had LLMs for 5 years now, and billions were put into pushing them to the limits. We have yet to discover any fundamental limitations that would prevent them from going all the way to AGI. And every time someone pops up with "LLMs can never do X", it's followed up by an example of LLMs doing X.

Not that it stops the coping. There is no amount of evidence that can't be countered by increasing the copium intake.



