
It keeps changing because our imagination of which tasks require intelligence is weak. We assume that if a computer can do X it can also do Y. Then someone builds a computer that can do X but can't do Y, and we say "oh, so X doesn't require intelligence after all, let me know when it can do Z and we can talk again." That doesn't mean Z proves the computer is intelligent, just that Z is a point where we can look at it and discuss again whether we've made any progress. What we really want is a computer that can do Y, but we keep inventing small proxy tasks that are easier to test against.

The Turing test is a great example of this. Turing thought a computer would need to be intelligent to pass it. In practice it was "passed" by hardcoding a lot of responses and by a better understanding of human psychology, of what kind of conversation seems plausible even when most of it is canned. That solution obviously isn't AI, and I bet you don't think so either, but it still passed the Turing test.
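To make "hardcoding" concrete, here's a minimal ELIZA-style sketch (my own toy example, not any particular contest entry): a few regex patterns mapped to canned replies, plus deflecting fallbacks that read like a cagey human. Scripts in this spirit are what the trick amounts to.

    import re
    import random

    # Hardcoded patterns -> canned replies; {0} is filled from the regex group.
    RULES = [
        (r"\bI feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"\bmy (mother|father|family)\b", ["Tell me more about your {0}."]),
        (r"\?$", ["Why do you ask?", "What do you think?"]),
    ]
    # Deflections for anything the rules don't cover.
    FALLBACKS = ["I see.", "Go on.", "Interesting, tell me more."]

    def respond(message: str) -> str:
        for pattern, replies in RULES:
            match = re.search(pattern, message, re.IGNORECASE)
            if match:
                return random.choice(replies).format(*match.groups())
        return random.choice(FALLBACKS)

    print(respond("I feel ignored"))   # e.g. "Why do you feel ignored?"
    print(respond("Do you like me?"))  # e.g. "Why do you ask?"

No model of the world anywhere, just pattern matching and a guess about what a judge will let slide.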



At what point do we give up and accept that there is no one thing called intelligence, just a bunch of hacks that work pretty well for different things? I think that's where people keep going wrong here. Maybe the reason we keep failing to find the special ingredient in every new field that AI conquers is that there's nothing special to find. We could keep moving the goalposts, a sort of intelligence-of-the-gaps argument, but that doesn't seem productive.



