
I'm unaware of any proof (in the mathematical sense, for example) that _we_ aren't just kickass machines calculating sequences at varying probabilities, though.
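For what it's worth, the "calculating sequences at varying probabilities" picture, at its crudest, looks something like the toy below (a Markov-chain sketch with invented numbers, nothing close to a real transformer, just to make the claim concrete):

    import numpy as np

    # Toy "language model": a fixed table of next-token probabilities.
    # All numbers are invented purely for illustration.
    vocab = ["the", "cat", "sat", "on", "mat", "."]
    next_token_probs = {
        "the": [0.0, 0.5, 0.1, 0.0, 0.4, 0.0],
        "cat": [0.0, 0.0, 0.8, 0.1, 0.0, 0.1],
        "sat": [0.1, 0.0, 0.0, 0.8, 0.0, 0.1],
        "on":  [0.8, 0.0, 0.0, 0.0, 0.2, 0.0],
        "mat": [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
        ".":   [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    }

    rng = np.random.default_rng(0)
    token = "the"
    sequence = [token]
    for _ in range(6):
        # Sample the next token from its probability distribution (not just the argmax).
        token = str(rng.choice(vocab, p=next_token_probs[token]))
        sequence.append(token)
    print(" ".join(sequence))

Whether brains do something analogous at a vastly richer level is exactly the open question.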

perhaps that is how the argument persists?



Humans do this, but it is not all they do. How do we explain humans who invent new concepts, new words, new numerical systems, new financial structures, new legal theories? These are not exactly predictions (since they don't exist in any training set), but they may be composed from such sets.


> How do we explain humans who invent new concepts

Simple: they are hallucinations that turn out to be correct or useful.

Ask ChatGPT to create a million new concepts that weren't in its training data and some of them are bound to be similarly correct or useful. The only difference is that humans have hands and eyes to test their new ideas.


How does a NN create a concept not in its training data? (Does it explore negative idea-space?) What if a concept uses a word not yet invented? How does an LLM produce that word, and what cosine similarity would such a word have if it has never appeared next to others? How would we know if such a word is useful?
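To make the cosine-similarity part concrete, here is a toy sketch (random vectors, not any real model's embeddings; every number is an invented assumption). A token with no co-occurrence statistics just keeps whatever arbitrary vector it was initialized with, so its similarity to everything else carries essentially no signal:

    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(42)
    dim = 8  # tiny embedding dimension, just for illustration

    # Pretend these two vectors were shaped by lots of co-occurrence data.
    king = rng.normal(size=dim)
    queen = king + 0.1 * rng.normal(size=dim)  # constructed to be similar

    # A token that never appeared in training keeps an arbitrary (here random) vector.
    never_seen = rng.normal(size=dim)

    print(cosine_similarity(king, queen))       # high, because we built it that way
    print(cosine_similarity(king, never_seen))  # typically near zero: no signal either way

So one honest answer may be that a model can only emit a never-seen word by composing existing subword tokens, and whatever "meaning" that word has comes from the composition rather than from a learned embedding.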


Efficiency matters. We do it with a fraction of the processing power.


true in the caloric/watts sense, but we might well have way higher computational power architecturally?
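Rough back-of-envelope numbers, if it helps (every figure below is a ballpark assumption, not a measurement of any particular system):

    # All values are rough, commonly cited ballpark figures / assumptions.
    brain_watts = 20                # human brain: ~20 W is the usual estimate
    gpu_watts = 400                 # one datacenter GPU: order of 300-700 W
    gpus_in_training_run = 10_000   # hypothetical large training cluster

    cluster_watts = gpu_watts * gpus_in_training_run
    print(f"cluster: ~{cluster_watts / 1e6:.0f} MW vs brain: {brain_watts} W")
    print(f"ratio: ~{cluster_watts / brain_watts:,.0f}x")

Which says nothing about raw computational throughput, only about energy, so the architectural question stands.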


100%, we do a lot more in real life. There are many circumstances where you work without prior training. This is how new inventions happen all the time.



