
I did this test extensively a few days ago, on a dozen models: none of them could count - all of them got the results wrong, and all of them indicated they could not check and would just guess.

Until they are capable of procedural thinking, they will be radically, structurally unreliable. Structurally delirious.

And it is also a good thing that we can check in this easy way - if the producers patched only this local fault, the absence of procedural thinking would not be as clear, and we would need more sophisticated ways to check.



If you think about the architecture, how is a decoder transformer supposed to count? It is not magic. The weights must implement some algorithm.

Take a task where a long paragraph contains the word "blueberry" multiple times, and at the end, a question asks how many times blueberry appears. If you tried to solve this in one shot by attending to every "blueberry," you would only get an averaged value vector for matching keys, which is useless for counting.
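
A minimal numpy sketch of that averaging effect (a toy single attention head with made-up vectors, not any real model's weights): with softmax weighting, two matching keys and twenty matching keys produce almost the same output vector, so the count is not recoverable from it.

  import numpy as np

  def softmax(x):
      e = np.exp(x - x.max())
      return e / e.sum()

  d = 4
  blueberry_key = np.ones(d)           # key that the query matches strongly
  other_key = -np.ones(d)              # key for unrelated tokens
  blueberry_val = np.array([1.0, 0.0, 0.0, 0.0])
  other_val = np.zeros(d)

  def attend(n_blueberry, n_other):
      keys = np.vstack([blueberry_key] * n_blueberry + [other_key] * n_other)
      vals = np.vstack([blueberry_val] * n_blueberry + [other_val] * n_other)
      query = blueberry_key
      weights = softmax(keys @ query)  # attention weights sum to 1
      return weights @ vals            # a weighted average of value vectors

  print(attend(2, 10))   # ~[1, 0, 0, 0]
  print(attend(20, 10))  # ~[1, 0, 0, 0] - same output, different count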

To count, the QKV mechanism, the only source of horizontal information flow, would need to accumulate a value across tokens. But since the question is only appended at the end, the model would have to decide in advance to accumulate "blueberry" counts and store them in the KV cache. This would require layer-wise accumulation, likely via some form of tree reduction.
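
And a companion toy of what "accumulating a value across tokens" could look like in principle (again a sketch, not a claim about real weights): a causal head attending uniformly over the prefix yields only a running fraction of "blueberry" tokens, so recovering an actual count needs extra machinery to undo the averaging with positional information - and the model would have to decide to compute this before the question ever arrives.

  import numpy as np

  # 1 marks a "blueberry" token, 0 marks anything else (toy sequence)
  is_target = np.array([0, 1, 0, 0, 1, 1, 0], dtype=float)
  T = len(is_target)

  # A causal head attending uniformly over the prefix outputs, at each
  # position, the average of the values seen so far...
  causal_uniform = np.tril(np.ones((T, T))) / np.arange(1, T + 1)[:, None]
  running_fraction = causal_uniform @ is_target

  # ...so turning that into a count means multiplying back by the position
  # index, i.e. an extra step somewhere later in the stack.
  running_count = running_fraction * np.arange(1, T + 1)
  print(np.round(running_count, 6))  # [0. 1. 1. 1. 2. 3. 3.]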

Even then, why would the model maintain this running count for every possible question it might be asked? The potential number of such questions is effectively limitless.


Did you enable reasoning? Qwen3 32b with reasoning enabled gave me the correct answer on the first attempt.


> Did you enable reasoning

Yep.

> gave me the correct answer

Try real-world tests that cannot be covered by training data or chancey guesses.


Counting letters is a known blind spot in LLMs because of how tokenization works in most of them - they don't see individual letters. I'm not sure it's a valid test for making any far-reaching conclusions about their intelligence. It's like saying a blind person is an absolute dumbass just because they can't tell green from red.

The fact that reasoning models can count letters, even though they can't see individual letters, is actually pretty cool.
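
For illustration, here is what the model actually receives - subword token IDs rather than characters. This uses the tiktoken library with its cl100k_base encoding as one example; the exact split differs between tokenizers and models.

  import tiktoken

  enc = tiktoken.get_encoding("cl100k_base")
  for word in ["blueberry", "strawberry"]:
      ids = enc.encode(word)
      pieces = [enc.decode([i]) for i in ids]
      print(word, "->", ids, pieces)

  # Each word typically arrives as a few multi-character chunks, so "how
  # many b's" is a question about characters the model never directly sees.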

>Try real-world tests that cannot be covered by training data

If we don't allow a model to base its reasoning on the training data it's seen, what should it base it on? Clairvoyance? :)

> chancey guesses

The default sampling in most LLMs uses randomness to feel less robotic and repetitive, so it’s no surprise it makes “chancey guesses.” That’s literally what the system is programmed to do by default.
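
Roughly what default decoding does, as a minimal temperature-sampling sketch (generic, not any particular vendor's stack): greedy decoding would always pick the top token, while temperature > 0 deliberately injects randomness.

  import numpy as np

  rng = np.random.default_rng(0)

  def sample_next_token(logits, temperature=0.8):
      # Scale logits, softmax, then draw: higher temperature flattens the
      # distribution, so lower-probability tokens get picked more often.
      scaled = np.asarray(logits, dtype=float) / temperature
      probs = np.exp(scaled - scaled.max())
      probs /= probs.sum()
      return rng.choice(len(probs), p=probs)

  vocab = ["2", "3", "4"]             # toy answer vocabulary
  logits = np.array([2.0, 1.0, 0.5])  # the model slightly prefers "2"
  print([vocab[sample_next_token(logits)] for _ in range(10)])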


> they don't see individual letters

Yet they seem to, judging from many other tests (character corrections or manipulations in texts, for example).

> The fact that reasoning models can count letters, even though they can't see individual letters

To a mind, every idea is a representation. But we want the processor to work reliably on those representations.

> If we don't allow a [mind] to base its reasoning on the training data it's seen, what should it base it on

On its reasoning and judgement over what it was told. You do not just repeat what you heard - or you state that it is what you heard (and provide sources).

> uses randomness

That is in a way a problem, a non-final fix - satisficing (Herb Simon) from random germs instead of constructing through a full optimality plan.

In the way I used the expression «chancey guesses», though, I meant that guessing by chance when the right answer falls in a limited set ("how many letters in 'but'") is a weaker corroboration than when the right answer falls in a richer set ("how many letters in this sentence").


Most people act on gut instinct first as well. Gut instinct = a first semi-random sample from experience (= training data). That's where all the logical fallacies come from. Things like the bat-and-the-ball problem, where 95% of people give an incorrect answer, because most of the time people simply pattern-match too. It saves energy and works well 95% of the time. Just like reasoning LLMs, they can get to a correct answer if they increase their reasoning budget (but often they don't).

An LLM is a derivative of collective human knowledge, which is intrinsically unreliable itself. Most human concepts are ill-defined, fuzzy, very contextual. Human reasoning itself is flawed.

I'm not sure why people expect 100% reliability from a language model that is based on human representations which themselves cannot realistically be 100% reliable and perfectly well-defined.

If we want better reliability, we need a combination of tools: a "human mind model", which is intrinsically unreliable, plus a set of programmatic tools (say, like a human would use a calculator or a program to verify their results). I don't know if we can make something which works with human concepts and is 100% reliable in principle. Can a "lesser" mind create a "greater" mind, one free of human limitations? I think it's an open question.


> Most people act on gut instincts first as well

And we intentionally do not hire «most people» as consultants. We want to ask the intellectually diligent and talented.

> language model that is based on human representations

The machine is made to process the input - not just to "intake" it. To create an imitator of the average Joe would be a disservice, both because the project was to build a processor and because we refrain from asking the average Joe. The plan can never have been meant to be what you describe - a mockery of mediocrity.

> we want better reliability

We want the implementation of a well-performing mind - of intelligence. What you described is the "incompetent mind", the habitual fool - the «human mind model» is prescriptive, based on what the properly used mind can do, not descriptive of what sloppy, weak minds do.

> Can a "lesser" mind create a "greater" mind

Nothing says it could not.

> one free of human limitations

Very certainly yes, we can build things with more time, more energy, more efficiency, more robustness etc. than humans.


The 2B Granite model can do this on the first attempt:

  ollama run hf.co/ibm-granite/granite-3.3-2b-instruct-GGUF:F16
  >>> how many b’s are there in blueberry?
  The word "blueberry" contains two 'b's.


I did include Granite (8B) in the tests I mentioned. You suggest granite-3.3-2b-instruct - no problem.

  llama-cli -m granite-3.3-2b-instruct-Q5_K_S.gguf --seed 1 -sys "Count the words in the input text; count the 'a' letters in the input text; count the five-letter words in the input text" -p "If you’re tucking into a chicken curry or a beef steak, it’s safe to assume that the former has come from a chicken, the latter from a cow"
response:

  - Words in the input text: 18
  - 'a' letters in the input text: 8
  - Five-letter words in the input text: 2 (tucking, into)
All wrong.
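
For reference, the ground truth can be computed programmatically (a quick sketch; what exactly counts as a "word" or a "five-letter word" - apostrophes, contractions - is a judgment call):

  import re

  text = ("If you’re tucking into a chicken curry or a beef steak, it’s safe "
          "to assume that the former has come from a chicken, the latter "
          "from a cow")

  words = re.findall(r"[A-Za-z'’]+", text)
  print("words:", len(words))
  print("'a' letters:", text.lower().count("a"))
  five = [w for w in words if len(re.sub(r"['’]", "", w)) == 5]
  print("five-letter words:", len(five), five)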

Sorry, I did not have the "F16" file available.


So did DeepSeek. I guess the Chinese have figured out something the West hasn't: how to count.


No, DeepSeek also fails. (It worked in your test - it failed on similar ones.)

(And note that DeepSeek can be very dumb - both as experienced in our own practice and in standard tests, where it shows an IQ of ~80, whereas with other tools we reached ~120 (trackingai.org). DeepSeek was an important step, a demonstration of potential for efficiency, a gift - but it is still part of the collective work in progress.)


https://claude.ai/share/e7fc2ea5-95a3-4a96-b0fa-c869fa8926e8

It's really not hard to get them to reach the correct answer on this class of problems. Want me to have it spell it backwards and strip out the vowels? I'll be surprised if you can find an example this model can't one-shot.


(Can't see it now because of maintenance, but of course I trust it - that some get it right is not the issue.)

> if you can find an example this model can't

Then we have a problem of understanding why some work and some do not, and a crucial due-diligence problem of determining whether the class of issues revealed by the failures of so many models is fully overcome in the architectures of those that work, or whether the boundaries of the problem have merely moved and still taint other classes of results.


Gemini 2.5 Flash got it right for me first time.

It’s just a few anecdotes, not data, but that’s two examples of first-time correctness, so it certainly doesn’t seem like luck. If you have more general testing data on this, I’m keen to see the results and methodology, though.


Throwing a pair of dice and getting exactly 2 can also happen on the first try. That doesn't mean the dice are a 1+1 calculating machine.


I guess my point is that the parent comment says LLMs get this wrong, but presents no evidence for that, and two anecdotes disagree. The next step is to see some evidence to the contrary.


> LLMs get this wrong

I wrote that of «a dozen models», «none of them could count» - all of those I tried, with or without reasoning.

> presents no evidence

Create an environment to test and look for the failures: a system prompt like "count this, this, and that in the input"; a user prompt with some short paragraph; the models, the latest open-weights ones.
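
A minimal sketch of such a test environment (query_model is a hypothetical placeholder for whatever backend is used - llama-cli, an Ollama server, a hosted API; only the ground-truth side is spelled out, with the same caveats about word definitions as above):

  import re

  SYSTEM = ("Count the words in the input text; count the 'a' letters in the "
            "input text; count the five-letter words in the input text")

  def ground_truth(text):
      words = re.findall(r"[A-Za-z'’]+", text)
      return {
          "words": len(words),
          "a_letters": text.lower().count("a"),
          "five_letter_words": sum(len(re.sub(r"['’]", "", w)) == 5
                                   for w in words),
      }

  def query_model(system, prompt):
      raise NotImplementedError("call the local or remote model here")

  paragraph = "Some short paragraph taken from a news article or a book."
  print(ground_truth(paragraph))
  # then compare against query_model(SYSTEM, paragraph) once a backend is wired up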

> two anecdotes disagree

There is a strong asymmetry between verification and falsification. That falsification occurred across a full set of selected LLMs - a lot of them. If there are two classes, the failing class is numerous, and the difference between the two must be pointed out clearly - especially since we believe the failure will carry over beyond the case of counting.


I tested it the other day, and Claude with reasoning got it correct every time.


The interesting point is that many fail (100% in the class I had to select), which raises the question of the difference between the pass class and the fail class, and the even more important question of whether the solution inside the pass class is contextual or definitive.



