
Can someone explain whether there is a principled reason to not use all the sensors available and choose just cameras?


From public information it's a matter of price and "appearance." No one who's half serious about understanding how the system works would use cameras alone, but remember that of all the companies serious about putting self-driving cars on the road, Tesla is the only one that designs, manufactures and sells cars. Incidentally, Tesla is also the only car company pushing camera-only as a "solution" which, again, it is not. All the other outfits are pure tech plays, so their incentive is geared more towards a working system, because that's the only thing they can sell aside from the dream. If Tesla fails with a camera-only self-driving system, it can still sell cars. Source: used to work in robotics; all my classmates from grad school work or have worked at a self-driving car company.


Being stubborn about this seems very unwise on their part: at the very least, it means the whole company carries an existential risk if they can't get their favoured solution to work.


Think along the lines of:

1. What is necessary vs sufficient.

2. Processing power.

3. Power consumption.

4. Hardware cost. (Even if hardware has become cheaper now, what mattered was the cost when the programme was started years ago.)

5. Training cost / volume of data.

The approach that Tesla is using balances all these things. I can't fully explain why it's so controversial, but I suspect this topic attracts folks from autonomous startups who are using very different approaches, people who repeat what they've heard elsewhere, and the usual anti-Tesla folk.

If you want to know whether Tesla's approach can work, first realise that the sensor suites only help with the 'perception' part of the autonomy problem. Then watch some of the Tesla FSD videos on YouTube and check whether the visualisation seems accurate or not. It's certainly not 100% perfect yet, but it's clear to me that the perception part of the problem is mostly solved. The biggest remaining problems seem behavioural.


As long as Teslas regularly crash into stationary objects because they have no depth sensing system and rely only on camera images, I wouldn't call the perception part of the problem solved.
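For context on what "depth from cameras" can and cannot mean: cameras are not inherently depth-blind. A calibrated stereo pair recovers depth geometrically from pixel disparity via the classic pinhole relation (monocular, learned depth is a separate, harder problem). A minimal sketch of that relation; the function name and numbers are illustrative, not from any production system:

```python
def stereo_depth(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in metres of a point seen by both cameras of a stereo pair.

    Pinhole-stereo relation: depth = focal_length * baseline / disparity.
    Illustrative only -- real pipelines must also handle calibration error,
    rectification, and matching failures.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point visible in both views)")
    return focal_length_px * baseline_m / disparity_px

# Example: 1000 px focal length, 30 cm baseline, 10 px disparity -> 30 m.
print(stereo_depth(1000.0, 0.3, 10.0))  # 30.0
```

Note how accuracy degrades with distance: at long range the disparity shrinks towards a fraction of a pixel, so small matching errors translate into large depth errors, which is one reason lidar remains attractive for detecting distant stationary objects.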




