
Ok here it is. An LLM with physical senses. They mentioned a sense of touch, and I think that is a big deal. You can teach me all you want about a new color with all the text descriptions available, and it would still be worse than just showing me that new color.

In the same way, letting an AI actually touch and interact with the world would do wonders in grounding it and making sure it understands concepts beyond just the words or the bytes it was fed during training. GPT-4 can already understand images and text; it shouldn't be long until it handles video and we can say AI has vision. This robot from Toyota would have touch. We need hearing and smelling, and then maybe we will get something resembling a true AGI.



> Ok here it is. An LLM with physical senses.

See: Pieter Abbeel & Jitendra Malik

https://www.therobotbrains.ai/copy-of-jitendra-malik


There's no reason to expect that such advances will get us closer to true AGI. It's not impossible, but there's no coherent theory or technical roadmap, just a lot of hope and hand-waving.

I do think that this is an impressive accomplishment and will lead to valuable commercial products regardless of the AGI issues.


> true AGI

What is that? Most humans have general intelligence, but do other apes? Do dogs? A quick Google search suggests that the answer is yes.

If that’s the case, then this approach may indeed yield an AGI but maybe it’s not going to be much better than a golden retriever. Truly excellent at some things, intensely loyal (I hope), but goofy at most other things.

Or maybe just as dogs will never understand calculus, maybe our future AGI will understand things that we will not be able to. It seems to me that there’s a good chance that AGI (and intelligence in general) is a spectrum.


Yep, and that’s rather terrifying, is it not? Is there any good reason to assume that future AGI will share our sense of morality, once it is smart enough to surpass human thought?


> Is there any good reason to assume that future AGI will share our sense of morality

I think it would be surprising if it did. Just as our morality is shaped by our understanding of the world and our capabilities, a future AGI's morality would be shaped by its understanding of the world and its capabilities. It might do something that we think is terrible but isn't, because we lack the capacity to understand why it's doing what it's doing. I'm thinking about how a dog might think going to the vet is punishment when we are actually doing it out of love.


There’s a wonderful novella by Ted Chiang called “The Lifecycle of Software Objects” that addresses your thoughts exactly. Highly recommended.


Thanks. I’ll check it out.


Intelligence in general seems to be a spectrum for animals. Future AGI may be on an entirely different spectrum which isn't directly comparable. We won't know until someone actually builds one and we have a chance to evaluate it.


"True AGI" is often used in a way that means "human-like intelligence, but faster, more consistent and of greater depth". In that case, knowing that embodied agents are the way forward is quite trivial. We've known for a long time that the development of a human brain is a function of its sensory inputs - why would this be any different for an artificial intelligence, especially if designed to mimic/interface with a human one?


That's not the right question to ask. You can construct all sorts of hypotheticals or alternative answers, but all of that is meaningless until someone actually builds it.


Why imitate human senses? AI should be able to reach out and touch radio waves and interpret their meanings, much as we interpret the meaning behind gusts of wind.


We can do both, but clearly the senses that mammals have are well adapted for existence on Earth.


Existence is how you interpret it. An AI’s existence might mean it has a very different interpretation of Earth.


Sure, but when I ask you to pass the potato chips, I want you to understand what that means and be physically able to do so. The five senses we know well are very well suited for that.


> reach out and touch radio waves

At the point we're describing "touching" massless particles, we might as well say that's what our retinas do. In terms of novel senses, some kind of gravimetric sense would be neat. LIGO-on-a-chip and all.



