
Zero systemic, reproducible mistakes is the only acceptable criterion.

Do you really want to trust a heartless, profit-motivated corporation with 'better than human is good enough'?

What happens when Tesla decides they don't want to invest in additional mistake mitigation, because it's incompatible with their next product release?



Caveat/preface to prevent trolls: FSD is a sham and money grab at best, death trap at worst, etc.

But I've read through your chain of replies to OP, and maybe I can help with my POV.

OP is replying in good faith, arguing that "this sampling incident is out of scope of production testing/cars for several reasons, all greatly skewing the testing from this known bad actor source."

And you reply with "Zero systemic, reproducible mistakes is the only acceptable criterion."

Well then, you should know that this is the current situation: in Tesla's own testing, they achieve this. The "test" in this article, as OP is pointing out, is not a standardized test run by Tesla on current platforms. So be careful with your ultimatums, or you might give the corporation a green light to say "look! we tested it!"

I am not a Tesla fan. However, I am also aware that yesterday, thousands of people across the world were mowed down by human operators.

If I put out a test video showing that a human runs over another human with minimum circumstances met (i.e., rain, distraction, tires, density, etc.), would you call for a halt on all human driving? Of course not; you'd investigate the root cause, which most of the time is distracted or impaired driving.


The onus is on the replacement to prove itself.

Manual driving is the status quo.

Tesla would like to replace that with FSD (and make a boatload of money as a result).

My point is that we therefore can (and should!) hold Tesla to higher standards.

'Better than human' as a bar invites conflict of interest, because at some point Tesla is weighing {safety} vs {profit}, given that increasing safety costs money.

If we don't severely externally bias towards safety, then we reach a point where Tesla says 'We've reached parity with human, so we're not investing more money in fixing FSD glitches.'

And furthermore, without mandated reporting (of the kind Musk just got relaxed), we won't know at scale.


> Do you really want to trust...

No, but the regulator helps here: they do their own independent evaluation.

> What happens when Tesla decides...

The regulator should pressure them for improvements and suspend licenses for self-driving services that don't improve.


The regulator doesn't currently exist in a functional capacity.

https://techcrunch.com/2025/02/21/elon-musks-doge-comes-for-...



