Think along the lines of:

1. What is necessary vs sufficient.

2. Processing power.

3. Power consumption.

4. Hardware cost (even if costs have decreased since, consider what it cost to initiate the programme years ago).

5. Training cost / volume of data.

The approach Tesla is using balances all of these factors. I can't fully explain why it's so controversial, but I suspect this topic attracts folks from autonomy startups using very different approaches, people repeating what they've heard elsewhere, and the usual anti-Tesla crowd.

If you want to know whether Tesla's approach can work, first realise that the sensor suite only addresses the 'perception' part of the autonomy problem. Then watch some of the Tesla FSD videos on YouTube and judge for yourself whether the visualisation is accurate. It's certainly not 100% perfect yet, but it's clear to me that the perception part of the problem is mostly solved; the biggest remaining problems appear to be behavioural.



As long as Teslas regularly crash into stationary objects because they rely on camera images alone, with no dedicated depth-sensing system, I wouldn't call the perception part of the problem solved.
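Worth noting for context: depth can in principle be recovered from cameras alone. A minimal sketch of the classic pinhole-stereo relation (depth = focal length × baseline / disparity) — all numbers here are hypothetical and this is an illustration of the geometry, not Tesla's actual pipeline, which is reported to use learned monocular/multi-camera depth:

```python
# Illustrative only: recovering metric depth from two camera views
# via stereo disparity. Focal length, baseline, and disparity values
# below are made-up examples, not real vehicle parameters.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo relation: depth = f * B / d (metres)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 0.3 m baseline, 6 px disparity
print(depth_from_disparity(1000.0, 0.3, 6.0))  # → 50.0 metres
```

The practical dispute isn't whether vision can yield depth at all, but whether it does so reliably enough (low-texture surfaces, glare, night) compared with an active sensor such as lidar or radar.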




