[flagged] Tesla blows past stopped school bus and hits kid-sized dummies in FSD tests (engadget.com)
149 points by ndsipa_pomu 7 months ago | 161 comments


Why are Tesla related posts still being flagged? Mr Musk stepped out of his governmental role so criticism of his assets is no longer unavoidably political. My understanding was that criticism of Tesla was for the past few months seen as a political action and that many here don't want any inflammatory political discussions about the current US administration, but what's the current reason for flagging? This is surely tech/business news through and through.


Despite everything he has done Musk still has a number of hardcore followers who excuse every action and attack any criticism.


The comments become unreadable because everyone just argues over musk, so people just flag the whole thing.


Ironically, most of the mentions of Musk in this thread are due to the flagging.


Maybe the flaggers own Tesla stock. HN commenters have admitted TSLA value is based on hype not fundamentals. Negative titles are bad for hype.

But these stories are still reaching the front page, so arguably the flaggers are up against the algorithm and the algorithm is winning.


If you follow Tesla for any length of time, you'll find so many disingenuous slanted articles about the company and the cars that you'll begin to wonder what the hell is going on.

This article is about "testing" conducted by a guy who is trying to sell competing software products and has produced videos in the past that weren't replicable and were likely staged. They never release unedited footage or car diagnostic data, just a produced video presented with the intent of damaging Tesla's reputation and boosting their own sales.

This happens all the time, from the New York Times' false claims that their review car ran out of charge, to the most recent Mark Rober video that cost the CEO of a lidar company his job.

The video in this article requires independent validation; it is not from a trusted source, as it is produced by an extremely biased and financially motivated Tesla competitor.


> This article is about "testing" conducted by a guy who is trying to sell competing software products and [...]

What competing software products?


This is the product. https://www.ghs.com/products/auto_adas.html

The founder went as far as to run for CA senator just to regulate Tesla and bought a Superbowl ad to smear the company. https://techcrunch.com/2024/02/12/dawn-project-tesla-ad-fsd-...


Ah yes, the classic Tesla promoter smear.

Somehow operating systems and software development tools are in direct competition with Tesla ADAS software. Turns out Tesla is not in competition with GM, it is actually in competition with Linux, GCC, and Windows. Amazing.

The smear campaign to silence whistleblowers is still going strong.


> "requires a fully attentive driver and will display a series of escalating warnings requiring driver response."

I understand the reasoning behind it, but watching the video () of the test shows that the car did not warn the driver, and even if it had, it was going far too fast, leaving almost no time for a driver to respond.

Disclaimer: I have never used FSD before.

() https://dawnproject.com/the-dawn-project-and-tesla-takedowns...


> but watching the video () of the test shows that the car did not warn the driver

The warnings occur when you look away or don't touch the steering wheel for a while. Not saying that Tesla is without error (it isn't), but just clarifying what the warnings are for.


> The warnings occur when you look away

So they are useless. My car warns me even if I don't look.


> So they are useless. My car warns me even if I don't look.

No, they serve a very specific purpose -- (attempting) to ensure the driver is at the controls and paying attention.

Don't confuse the FSD attention "nag" warnings with collision warnings. The collision warnings will sound all the time, with and without FSD enabled and even if you're not looking at the road. If you don't start slowing down quickly, Automatic Emergency Braking (AEB) will slam on the brakes and bring the car to a stop.


So they are useless. My car warns me even if I don't look.

Heck, my car not only warns me, it slams on the brakes for me.

Scared the heck out of me when it happened, but it saved me from hitting something I didn't realize was so close.


The real problem is that it didn't recognize and stop for the stop signs on the school bus. The child is basically an afterthought designed to appeal to the emotion of those whom logic fails. Even if no kids materialized, the way a bus stop works (bus stops, then kids cross) means that detecting the kid really shouldn't be the primary trigger for stopping in this situation; the stop sign needs to be recognized and acted upon. Any ability to detect and stop for pedestrians is secondary to that.


I don't agree with this. Not hitting pedestrians should not just be an afterthought. Of course the car should recognise the stop sign, but there are cases in which stop signs are obstructed or missing, and in those cases pedestrians should still not be hit by a car.


Yes, but recognizing a pedestrian when he jumps in front of your car is useless -- you don't have time to stop anyway.

What you WANT to recognize is conditions when such an event is possible (obstructed vision) and to slow down in advance even if you don't see/detect any pedestrians at the moment.

This obviously includes the case with the school bus and the stop sign but, as you correctly point out, is not limited to that. There are more cases where a pedestrian, especially a child, can dart out in front of your car from behind a big vehicle or an obstacle.

Recognizing these situations and slowing down in advance is a characteristic trait of a well-intentioned, experienced driver. Though I think that most of the time it's not a skill you have straight out of driving courses; it takes time and a few close calls to burn it into your subconscious.


At 25 mph, which I would hope would be the speed limit on roads next to schools, slamming on the brakes even seconds before colliding with children can make an enormous difference in how fast the car is going when it hits the kid.

Speed is the factor in collisions (other than weight), and modern brakes are incredibly good.

Not to mention that the car, with its 360-degree sensors, could safely and efficiently swerve around the children even faster than it can brake, as long as there's not a car right next to you in another lane -- and even if there is, hitting another car is far less dangerous to its occupants than hitting the children would be to them.

These things should be so much better than we are, since they're not limited by unidirectional binocular vision, but somehow they're largely just worse. Waymo is, at best, a bit better. On average.


> At 25 mph, which I would hope would be the speed limit on roads next to schools

I regularly drive on a two lane 55mph highway that school buses stop on and let kids out.

It runs through a reservation and has no sidewalks at all.

> modern brakes are incredibly good.

They're probably not worth that much of a superlative, and they're fundamentally limited by the tires.

This is just a pet peeve of mine, since it is used by people to argue that modern vehicles are so vastly better than cars in the 1980s that we should be able to drive at 90mph like it is nothing.

But reaction times and kinetic energy are a bitch, and once traction control / stability assist hits its limits and can't bail you out, you might find out the hard way that you're not as good of a driver as you think you are, and your brakes won't stop you in time.

> Speed is the factor in collisions

This I will definitely agree with. Say it louder for everyone in the back.


25mph is 36 feet per second, about the length of a school bus. The stopping distance at 25mph is 30 feet, assuming perfect reaction time and dry pavement. Human reaction time is about 750ms, so stopping distance is about 2 school bus lengths. You don't have seconds.
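For anyone who wants to check the arithmetic, here is a minimal Python sketch (the ~0.7 g braking deceleration and 750ms reaction time are illustrative assumptions, not measured values):

    # Rough check of the numbers above: total stopping distance at 25 mph,
    # assuming ~0.7 g of braking on dry pavement and a 750 ms reaction time.
    MPH_TO_FPS = 5280 / 3600          # 1 mph = 1.4667 ft/s
    G = 32.17                         # ft/s^2

    def stopping_distance_ft(speed_mph, reaction_s=0.75, decel_g=0.7):
        v = speed_mph * MPH_TO_FPS            # speed in ft/s
        reaction = v * reaction_s             # distance covered before braking starts
        braking = v ** 2 / (2 * decel_g * G)  # distance covered while braking
        return reaction + braking

    print(stopping_distance_ft(25))   # ~57 ft, i.e. roughly 1.5-2 school bus lengths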

25mph is too fast for any street where kids may jump out behind parked cars. Not just school zones, but all residential streets. There's a knee at about 20mph in the traffic fatality stats where below that speed pedestrian collisions are unlikely to be fatal. Above 20mph fatalities become more common quite quickly.


> The stopping distance at 25mph is 30 feet,

Stopping is nice, but not the entire point of braking. The lower the collision speed, the better.

> Human reaction time is about 750ms

No. Human reaction time is around 250ms on average. What you point to is the time it takes to react to an unexpected stimulus. The number I've seen quoted is about 1s. But that assumes a completely unexpected event that you're not prepared for at all.

So if you're mindlessly passing a school bus at 25mph, then a 1s delay is expected. But if you're doing so with your foot covering the brake while hyper focused on reacting to any sudden movement in front of you, you can do much, much better than 1s. Of course, at that point you might as well drive correctly by slowing down.


This is why school buses flash yellow warning lights before deploying the stop sign and opening the doors.

It should never be the case that someone is surprised by an instantaneous bus stop. There are plenty of context clues in advance, including the fact that the bus is around at all, which should already heighten attention.


Even if you won't prevent the collision, reducing the velocity at which it happens is still very desirable. And, given that the kinetic energy of the car is proportional to velocity squared, even a little bit of reduction means a lot less energy dumped into the pedestrian.
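To put a rough number on that (a minimal Python sketch; the speeds are just illustrative):

    # Kinetic energy scales with v^2, so a modest speed reduction before impact
    # removes a disproportionate share of the energy delivered to the pedestrian.
    def relative_energy(v_impact, v_initial):
        return (v_impact / v_initial) ** 2

    # Braking from 25 mph down to 15 mph before impact leaves only ~36% of the energy.
    print(relative_energy(15, 25))  # 0.36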


Sure, if you've hit the brakes by then. The reaction time delay means you travel significant distance before hitting the brakes.


For the knee to be useful you have to go below it -- thus 15 mph is a much better neighborhood and school speed limit.


To add a data point. In Belgium, the max speed around schools is enforced to be 30 km/h. That should be a bit less than 20mph.


I wasn't talking about human reaction time.


Kids don't get dropped off only at school. They get dropped off at their homes and can be on 55 mph roads. You are quibbling with the main point anyway. The whole reason for the stop signs on buses is because kids will be kids, and really it's too late when they run out.


There are a lot of videos Waymo has posted of split second swerves they’ve had to do in SF and Austin. It looked to me like a combination of hard braking and swerving could have avoided the collision. Now to be fair to Tesla, the dummies in this test didn’t look very realistic, but not even slowing down for the school bus shows that FSD is not close to being ready for unsupervised use.


>I don't agree with this. Not hitting pedestrians should not just be an afterthought.

You're disagreeing with something I didn't say. There's a difference between afterthought and the primary initiator of the stop in a situation like this.

>Of course the car should recognize the stop sign, but there are cases in which stop signs are obstructed or missing, and in those cases pedestrians should still not be hit by a car.

The surprise pedestrian test is one that any vehicle can be made to fail by sheer physics. Avoiding errant pedestrians like in the video will likely only come as a byproduct of better situational behavior by self driving vehicles. The overwhelming majority of drivers know to ignore the speed limit if the situation is rife with pedestrians or otherwise sus and are generally fine with missing/obstructed stop signs. I don't know what route self driving software will take to approximate such behavior but it likely will need to.


> The overwhelming majority of drivers know to ignore the speed limit if the situation is rife with pedestrians or otherwise sus and are generally fine with missing/obstructed stop signs.

This sentence makes me want to cry. Sure, I may know drivers treat speed limits as minimum acceptable speed at best, but seeing it unironically spelled out like this hurts.

Please, it's just two simple words: 1) speed 2) limit. L I M I T. It's the maximum allowable speed. No, it's not the minimum speed, no it's not the target speed, no, it's not "more of a guideline than anything". You are not allowed to go faster.

It says nothing about a minimum speed. It says nothing about what speed you should drive at. All it does is limit the maximum speed. Repeat after me, it's a speed L I M I T.

Just kidding though, I know even speaking like I would to a 5 year old won't do it, this mentality runs far too deep. There's no hope.


> The surprise pedestrian test is one that any vehicle can be made to fail by sheer physics.

There are different degrees of failure as well: did the Tesla try to brake beforehand, or only apply the brakes after hitting the doll?


Bah. The car should slow down if the view is restricted or when it's passing a bus, especially a school bus, regardless of whether there's a stop sign or not.


Merely slowing down is not enough. If the car doesn't come to a complete stop and remain stopped while the bus has its stop sign extended, it's driving illegally.


They're saying if there's a school bus inside of a narrow corridor, that a prudent driver would slow down and use caution IN ANY CIRCUMSTANCE.

They're obviously not arguing that the car shouldn't stop with a sign deployed.

Arguing from a point of bad faith doesn't advance the discussion.


I'm saying slowing down is better, but not enough. If the manufacturer can program the system to slow down when it sees a school bus inside of a narrow corridor, then it can also program the system to (correctly) stop in that situation.


I want them to drive legally, but I also want them to be able to react to objects even if they don't have signs on them.

Signs are often obstructed by trees or are simply inadequate for safe driving, even when combined with the rules of the road. Any even "partially" automated driving should be able to trivially recognize when its view is compromised and proceed accordingly.


Totally agree. "Stop for a stopped school bus" and "Drive slow enough to stop in an emergency, in areas where pedestrians are obscured" are two separate, barely related problems that car companies need to reliably solve.


Ignoring a stop sign, not even slowing down, that's a major safety flaw.

I am wondering if there is a safety certification body for self driving technology. If not, one is needed because consumers can't be expected to be aware of all the limitations of the latest update they have installed.

There must be basic safety standards these systems need to meet, a disclaimer can't be the solution here.


There isn’t. We won’t regulate until there is public outcry over tens or hundreds of deaths.


Or you pay Trump more money than Elon.


I think the idea of self-driving needs to be strongly re-evaluated. There are countless videos of people in their Teslas driving down the road... from the back seat. FSD is simply not feasible right now, but it seems that when people let Tesla take the wheel, they are easily duped into assuming it will always work -- when it doesn't.

Until there are federal standards and rigorous testing of self-driving vehicle technologies they should not be installed, or advertised.


Just for currency: I can't hop in the back of my 2025 Tesla while FSD drives. It has an interior camera now. It complains if I'm looking outside my window too long.


Regardless of one's stance on Tesla, it's sad to see this post flagged.


Before anyone says it's on the responsibility of the driver, that's only while there is still a driver.

https://www.reddit.com/r/teslamotors/comments/1l84dkq/first_...


This has done the rounds on other platforms. A couple of important points:

- the failure here is that the car didn't stop for the bus on the other side of the road with the extended stop sign. (Obviously a kid running out from behind a car this late is fairly difficult for any human or self driving system to avoid)

- the FSD version for the robotaxi service is private and wasn't the one used for this test. The testers here only have access to the older public version, which is supposed to be used with human supervision

- the dawn project is a long-time Tesla FSD opponent that acts in bad faith - they are probably relying on false equivalence of FSD beta vs robotaxi FSD

Nevertheless this is a very important test result for FSD supervised! But I don't like that the dawn project are framing this as evidence for why FSD robotaxi (a different version) should not be allowed without noting that they have tested a different version.


I don't see why Tesla would deserve the benefit of the doubt here. We cannot know how well the actual Taxi software will work, I think it is fair to extrapolate from the parts we can observe.


re. extrapolation: I agree with that, but remember there's sampling error. The crashes/failures go viral but the lives saved get zero exposure or headlines. I don't think that means you can just ignore issues like this but I think it does mean it's sensible to try to augment the data point of this video with imagining the scenarios where the self driving car performs more safely than the average human driver


I absolutely do think that self-driving cars will save many lives in the long run. But I also think it is entirely fair to focus on the big, visible mistakes right now.

This is a major failure; failing to observe a stop sign and a parked school bus are critical mistakes. If you can't manage those you're not ready to be on the road without a safety driver yet. There was nothing particularly difficult about this situation; these are the basics you must handle reliably before we even get to all the trickier situations those cars will encounter in the real world at scale.


I agree it's a major mistake + should get a lot of focus from the FSD team. I'm just unsure whether that directly translates to prohibiting a robotaxi rollout (I'm open to the possibility it should though).

I guess the thing I'm trying to reconcile is that even very safe drivers make critical mistakes extremely rarely, so the threshold at which FSD is safer than even the top 10% of human drivers likely includes some nonzero level of critical mistakes. Right now Tesla has several people mining FSD for any place it makes critical mistakes and these are well publicised so I think we get an inflated sense of their commonality. This is speculation, but if true it leaves some possibility of it being significantly safer than the median driver while still allowing for videos like this to proliferate.

I do wish Tesla released all stats for interventions/near misses/crashes so we could have a better and non-speculative discussion about this!


Zero systemic, reproducible mistakes is the only acceptable criteria.

Do you really want to trust a heartless, profit-motivated corporation with 'better than human is good enough'?

What happens when Tesla decides they don't want to invest in additional mistake mitigation, because it's incompatible with their next product release?


Caveat/preface to prevent trolls: FSD is a sham and money grab at best, death trap at worst, etc.

But, I've read through your chain of replies to OP and maybe I can help with my POV.

OP is replying in good faith showing "this sampling incident is out of scope of production testing/cars for several reasons, all greatly skewing the testing from this known bad actor source."

And you reply with "Zero systemic reproducible mistakes is the only acceptable critera."

Well then, you should know, that is the current situation. In Tesla's own testing, they achieve this. The "test" in this article, which the OP is pointing out, is not a standardized test via Tesla on current platforms. So be careful with your ultimatums, or you might give the corporation a green light to say "look! we tested it!".

I am not a Tesla fan. However, I am also aware that yesterday, thousands of people across the world were mowed down by human operators.

If I put out a test video showing that a human runs over another human with minimum circumstances met, i.e. rain, distraction, tires, density, etc., would you call for a halt on all human driving? Of course not, you'd investigate the root cause, which is most of the time distracted or impaired driving.


The onus is on the replacement to prove itself.

Manual driving is the status quo.

Tesla would like to replace that with FSD. (And make a boatload of money as a result)

My point is that we therefore can (and should!) hold Tesla to higher standards.

'Better than human' as a bar invites conflict of interest, because at some point Tesla is weighing {safety} vs {profit}, given that increasing safety costs money.

If we don't severely externally bias towards safety, then we reach a point where Tesla says 'We've reached parity with human, so we're not investing more money in fixing FSD glitches.'

And furthermore, without mandated reporting (of the kind Musk just got relaxed), we won't know at scale.


> Do you really want to trust...

No, but the regulator helps here - they do their own independent evaluation

> What happens when Tesla decides...

the regulator should pressure them for improvements and suspend licenses for self driving services that don't improve


The regulator doesn't currently exist in a functional capacity.

https://techcrunch.com/2025/02/21/elon-musks-doge-comes-for-...


It should get a lot of focus from the regulator, not "the FSD team".


Tesla and this administration operate under the Boeing model: surely the manufacturer knows best.


agreed - I said FSD team to distinguish from "the crowds" but this was the wrong wording, should be the regulator too.


No! Ignoring a stop sign is such a basic driving standard that it's an automatic disqualification. A driver that misses a stop sign would not have my kids in their car. They could be the safest driver on the racetrack it does not matter at that point.


I agree with that, but remember there's sampling error.

Ma'am, we're sorry your little girl got splattered all over the road by a billionaire's toy. But, hey, sampling errors happen.


Also, they've repeatedly tested closer and closer distances until the Tesla failed, aka p-hacking.


In the video (starting at ~13 seconds), the Tesla is at least 16 and probably 20 car lengths from the back of the bus with the bus red flashing lights on the entire time.

If the Tesla can't stop for the bus (not the kid) in 12 car lengths, that's not p-hacking, that's Tesla FSD being both unlawful and obviously unsafe.


I think their test is valid and not in bad faith, because they demonstrate that Tesla's self-driving technology has no basic safety standards.

Your argument that a newer version is better simply because it's newer does not convince me. The new version could still have that same issue.


> Your argument that a newer version is better

I actually didn't say that and am not arguing it formally - I said what I said because I think that the version difference is something that should be acknowledged when doing a test like this.

I do privately assume the new version will be better in some ways, but have no idea if this problem would be solved in it - so I agree with your last sentence.


> […] I think that the version difference is something that should be acknowledged when doing a test like this.

Did they anywhere refer to this as Robotaxi software rather than calling it FSD, the term Tesla has chosen for this?


I don't think there's an official divergence in terminology other than the version numbers (which have mostly stopped incrementing for FSD vehicle owners, meanwhile there is a lot of work going into new iterations for the version running on tesla's robotaxi fleet)


Then I struggle to understand what they should have acknowledged here concerning the software used? That they do not have access to a version of FSD which currently isn’t accessible to the public? I’d think that’s self-evident for any Organisation not affiliated with Tesla.


It is also a failure that it just keeps cruising and continues to drive over the thing/child that was hit.


that should have been in my list, you're right


Every time Tesla's FSD is shown to be lacking, someone always says "well that's not the real version and these people are biased!"


Don't forget the standard "And the next version surely will be much better!"


It's always the next version that will go from catastrophic failure to being perfect. This card has been played 100 times over the last few years.


Exactly what I wanted to add. Every single time there is hard evidence of what a failure FSD is, someone points out that they didn't use the latest beta. And of course they provide zero evidence that this newer version actually addresses the problem. Anyone who knows anything about software and updates understands how new versions can actually introduce new problems and new bugs…


Exactly this.


> Obviously a kid running out from behind a car this late is fairly difficult for any human or self driving system to avoid

That's why you slow down when you pass a bus (or a huge american SUV).


In this case, you stop for the bus that is displaying the flashing red lights.

Every driver/car that obeys the law has absolutely no problem avoiding this collision with the child, which is why the bus has flashing lights and a stop sign.


I'm honestly shocked that all versions of Tesla assisted/self-driving don't have a hardcoded stopped schoolbus exception to force slow driving.

Worst case, you needlessly slow down around a roadside parked schoolbus.

Best case, you save a kid's life.


If it merely slowed down, it would still be driving illegally. Everywhere I've driven in the USA, if there is a stopped school bus in front of you, or on the other side of the road, all traffic on the road must stop and remain stopped until the school bus retracts the sign and resumes driving.


Point being, if it always slowed down (sign or no sign), then at least reaction time requirements would be lessened.


I mean, it can be programmed in any way, so why not program it to follow both rules: If the sign is extended, then stop. Else if the sign is not extended, slow down.


if(bus) slow

if(bus and sign) stop

Then at least it fails slightly safer.
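Making that ordering explicit (a hypothetical Python sketch of the "fail slightly safer" rule, not a claim about Tesla's actual planner):

    # Hypothetical decision rule for the scenario above -- just the
    # "fail slightly safer" ordering spelled out, not Tesla's real logic.
    def school_bus_action(bus_detected: bool, stop_sign_extended: bool) -> str:
        if bus_detected and stop_sign_extended:
            return "stop"     # legally required: stop and stay stopped
        if bus_detected:
            return "slow"     # no sign detected: still reduce speed near the bus
        return "proceed"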


If I put a stop sign somewhere, is that legal? Or is there some statute (or at least a local ordinance) that says that yellow buses on such-and-such a route can have moving stop signs?


Obviously, there is law that forms the basis for this. In Massachusetts, it's MGL c. 90 § 14

https://www.mass.gov/info-details/mass-general-laws-c90-ss-1...


Yes, in the US, school buses come equipped with a stop sign that they extend when stopping to let kids on and off the bus. You (as a driver) are required to stop (and not pass) on both sides of the road while the bus has their sign extended.


It's almost as if we need human-level intelligence to be actually able to reason about the nuances of applicability of traffic signs :).


Exactly. If your view is obstructed by something, anything, you slow down.


> Obviously a kid running out from behind a car this late is fairly difficult for any human or self driving system to avoid

Not really - it's a case of slowing down and anticipating a potential hazard. It's a fairly common situation with any kind of bus, or similarly if you're overtaking a stationary high-sided vehicle as pedestrians may be looking to cross the road (in non-jay-walking jurisdictions).


Yes. And given the latency of cameras (or even humans), the inability to see around objects, and the fact that dogs and kids can move fast from hidden areas into the path, driving really slowly next to large obstacles until you can see behind them becomes even more important.

One of the prime directives of driving, for humans and FSD systems alike, must be "never drive faster than brakes can stop in visible areas". This must account for such scenarios as obstacles stopped or possibly coming the wrong way around a mountain turn.


> never drive faster than brakes can stop in visible areas

Here in the UK, that's phrased in the Highway Code as "Drive at a speed that will allow you to stop well within the distance you can see to be clear". It's such a basic tenet of safety as you never know if there's a fallen tree just round the next blind corner etc. However, it doesn't strictly apply to peds running out from behind an obstruction as your way ahead can be clear, until suddenly it isn't - sometimes you have to slow just for a possible hazard.


And that should be a case where automated systems can easily beat people. It's "just" math, there's no interpretation, no reasoning, no reading signs, no predicting other drivers, just crunching the numbers based on stopping distance and field of view. If they can't even do this, what are they for?
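As a rough sketch of that math (Python; the deceleration and system reaction time are assumptions, not figures from any real system):

    import math

    # "Never drive faster than you can stop within the distance you can see to
    # be clear": solve v*t_r + v^2/(2a) <= sight_distance for the largest safe v.
    def max_safe_speed_ms(sight_m, reaction_s=0.5, decel_ms2=7.0):
        a, t = decel_ms2, reaction_s
        return -a * t + math.sqrt((a * t) ** 2 + 2 * a * sight_m)

    # e.g. 20 m of clear view past a stopped bus:
    print(max_safe_speed_ms(20) * 3.6)  # ~49 km/h as an upper bound; slower still
                                        # if someone may step out from behind the bus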


The simulation of a kid running out from behind the bus is both realistic, and it points out another aspect of the problem with FSD. It didn't just pass the bus illegally. It was going far too fast while passing the bus.

As for being unrepresentative of the next release of FSD, we've had what, eight or ten years, of "it's going to work on the next release".


Sounds like a Tesla problem for naming their crappy tech "Full Self-Driving".


Some important counter-points:

- FSD has been failing this test publicly for almost three years, including in a Super Bowl commercial. It strains credulity to imagine that they have a robust solution that they haven't bothered to use to shut up their loudest critic.

- The Robotaxi version of FSD is reportedly optimized for a small area of austin, and is going to extensively use tele-operators as safety drivers. There is no evidence that Robotaxi FSD isn't "supposed" to be used with human supervision, its supervision will just be subject to latency and limited spatial awareness.

- The Dawn Project's position is that FSD should be completely banned because Tesla is negligent with regard to safety. Having a test coincide with the Robotaxi launch is good for publicity, but the distinction isn't really relevant because the fundamental flaw is with the company's approach to safety regardless of FSD version.

- Tesla doesn't have an inalienable right to test 2-ton autonomous machines on public roads. If they wanted to demonstrate the safety of the robotaxi version they could publish the reams of tests they've surely conducted and begin reporting industry standard metrics like miles per critical disengagement.


>(Obviously a kid running out from behind a car this late is fairly difficult for any human or self driving system to avoid)

Shouldn't that be the one case where self driving system has an enormous natural advantage? It has faster reflexes, and it doesn't require much, if any, interpretation or understanding of signs or predictive behavior of other drivers. At the very worst, the car should be able to detect a big object in the road and try to brake and avoid the object. If the car can't take minimal steps to avoid crashing into any given thing that's in front of it on the road, what are we even doing here?


I agree - it's why I think some type of radar/lidar is necessary for autonomous vehicles to be sufficiently safe. I get the theory that humans can process enough info from two eyes to detect objects, so multiple cameras should be able to do the same, but it looks like it's a tough problem to solve.


Does the Tesla taxi option afford radar/lidar?

My understanding is that Tesla is the only manufacturer trying to make self-driving work with just visual-spectrum cameras --- all other vendors use radar/lidar _and_ visual-spectrum cameras.


They've painted themselves into a corner really, as if they start using anything other than just cameras, they'll be on the hook for selling previous cars as being fully FSD-capable and presumably have to retrofit radar/lidar to anyone that was mis-sold a Tesla.


"an appreciating asset" IIRC


...and radar and mics and Google's geospatial data and 15 years of experience.


... Wait, so you think that Tesla have a child-killing and a non-child-killing version, but are only providing the child-killing version to consumers?

... eh? I mean, what?


I’ve seen this “next version” trick enough times and in enough contexts to know when to call BS. There’s always a next version, it’s rarely night-and-day better, and when it is better you’ll have evidence, not just a salesman’s word. People deserve credit/blame today for reality today, and if reality changes tomorrow, tomorrow is when they’ll deserve credit/blame for that change. Anybody who tries to get you to judge them today for what you can’t see today is really just trying to make you dismiss what you see in favor of what you’re told.


Sorry, I am failing to see how any of these three points are relevant or even important.


I don't know why this is flagged unless it's just the Tesla/Musk association. I thought that self-driving vehicles are a popular topic on HN.


It's obviously because of Musk. Anything that paints him in a bad light is flagged ad nauseam.


It seems crazy to me that the U.S. allows beta software to be used to control potentially lethal vehicles on public roads. (Not that I have much faith in human controlled lethal vehicles)


[flagged]


Advance what? Car dependence?


Advance robotics, automation and artificial intelligence as well as the state of the art in transportation. And yes, that obviously includes automobile transportation. What year are you living in? 1892? Do you realize how impossible modern society would be without cars?


Surely you jest? The downside is it kills and maims people, including children. That’s not an acceptable trade off.

Most countries have authorities who feel the same way. If you think killing and maiming is “advancing”, then you have a weird view of society.


I don't think they're joking.

> If you think killing and maiming is “advancing”, then you have a weird view of society.

Classic accelerationism. "Some of you may die, but it's a sacrifice I am willing to make."


People are dying already. The status quo is death, you understand that, yes? Human error at the wheel has cost millions in lives around the planet over the course of my lifetime. 1.2 million die around the world every single year.

Classic decel shortsighted thinking.


"The answer to people dying is for people to die in the service of people dying."

Gotcha.


'Go tell it from the mountainhead'? /s


You do understand that tens of thousands die on the road in the U.S. every year as status quo, right? The status quo is that human error kills that many. That's the upside and the need to test and iterate in actual real world environments is obvious. That's why FSD is supervised. Very simple.


Lyft are doing a better job (and more safely) than Tesla.


> It's the only way to get the task done

Well, that's complete baloney. You're either stupid or being disingenuous if you can't think of any other approach.

> What is your country doing to advance?

Well, we're still in the grip of motornormativity, but we are installing more active travel facilities to reduce congestion, pollution and lives lost, despite the extremely loud opposition from the car lobbies.

The problem with personal car-shaped vehicles is that they don't scale in built up areas. As more people drive, congestion builds up until mistaken planners decide to increase the width/number of roads. That entices more people to drive which then increases the congestion further. As more people drive and more space is taken up by roads and parking, facilities get spread further apart until they become only reachable by cars. This then feeds into the loop to increase car usage which increases congestion which leads to more roads and more parking. Rinse and repeat.


> You're either stupid or being disingenuous if you can't think of any other approach

Please don't comment like this here, no matter how right you are or think you are.


Sorry


The thing armchair auto-focused urbanists don't understand is the infeasible cost of changing urban placement, because it accretes over decades/centuries.

It's trivially easy for a city to paint itself into a corner from which it's (financially) impossible to escape, with 'best intentions' at every step.

Next thing they know, major employment hubs are too far apart, with major buildings that cannot be relocated, without contiguous rights of way, and... they're fucked.


It's instructive to look at how the Netherlands went from car-focussed design in the 1970s to their modern multi-modal designs nowadays. It just needs the political will, which in the Netherlands was helped with the Stop The Child Murder campaign (Stop de kindermoord).

https://en.wikibooks.org/wiki/Transportation_Planning_Casebo...


It's another Dan O'Dowd test. I'm now convinced that all of his tests are elaborate fakes, since he has a constant stream of super dangerous failures, but none of them are reproducible for anyone else.


How much of Tesla's FSD is learned from watching a lot of recordings of human drivers and how much of it is explicitly coded rules?

I'd expect school bus stop signs to be challenging to learn because of how context dependent they are compared to regular stop signs. Some examples, drawn from the rules in my state (Washington) are below. These may be different in other states.

With a regular stop sign you stop, then you go when it is safe to do so.

When you stop for a school bus sign you are supposed to stay stopped as long as the sign is there. You don't go until the school bus retracts the sign. It is essentially like a red light rather than a stop sign.

Whether or not a school bus stop sign applies to you depends on the direction you are traveling and on the structure of the roadway.

It applies if either of the following are true: (1) you are traveling in the same direction as the bus, (2) it is a two lane roadway and the lanes are not separated by a physical barrier or median. Three or more lanes or a barrier or median and cars traveling opposite the bus don't have to stop.
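Encoding just those Washington rules as stated above (a simplified sketch for illustration only, not legal advice; other states differ):

    # Simplified sketch of the Washington rules described above.
    # Real statutes have more cases (turn lanes, lane counting, etc.).
    def must_stop_for_school_bus(same_direction: bool,
                                 total_lanes: int,
                                 divided_by_barrier_or_median: bool) -> bool:
        if same_direction:
            return True
        # Oncoming traffic must stop only on an undivided two-lane roadway.
        return total_lanes <= 2 and not divided_by_barrier_or_median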


Just a data point, no argument:

My 2025 Tesla stops for road work flaggers spinning their signs from "Slow" to "Stop".

The issue here, correctly, is that it should come to a stop for 1) a bus flashing red, 2) with or without stop signs, 3) on an undivided road. Or, in our automated future, at least come to a crawl like FSD does now when entering a parking lot or another pedestrian-likely location.


This is a Dan O'Dowd production. He's spent significant capital trying to take down Autopilot/FSD for years and has played dirty in the past.


Why on earth can't this be done by normal testing agencies? Why do things like "Tesla Takedown" have to participate in it? Even if the test was 100% legit, that connection to mere protest movements taints it immediately. It's like when oil companies publish critical research on climate change. Or Apple publishing research that AI is not that good and their own fumblings should not be seen as a bad omen. This kind of stuff could be factually completely correct and most rational people would still immediately dismiss it due to conflict of interest. All this will do is fire up fanboys who were already behind it and get ignored by people who weren't. If the real goal is to divide society, this is how you do it.


You mean the ones gutted by said owner of said company?

Comparing "Tesla Takedown" with ExxonMobile is way too quick, you should have said Greenpeace. I'd say that TT has to do this, Is part of the point.


In normal times, perhaps, today…

https://www.theverge.com/news/646797/nhtsa-staffers-office-v...

When regular, in-theory-bipartisan mechanisms fail, protest is all you have left.


> NHTSA staffers evaluating the risks of self-driving cars were reportedly fired by

Elon Musk, who also owns Tesla.


You seem to ignore the historical reasons why testing agencies exist in the first place. There wasn't a consensus back then that they even needed to exist. People like Ralph Nader needed to hammer the point again and again and again to will those standards into existence. The pressure groups fighting against Tesla are doing the same harsh difficult job that Nader did.

https://en.wikipedia.org/wiki/Unsafe_at_Any_Speed%3A_The_Des...


Well quite; it's an indictment of America's institutions if investigations held in the public interest must be conducted by third parties and not the establishment.

What other hidden dangers slip by without public knowledge?


If only there were some proven way to hold abusive power to account in the public consciousness. https://en.m.wikipedia.org/wiki/The_Jungle


In my view, the onus to conduct tests and prove that it is safe to use should be conducted by the manufacturer and should meet the requirements of the regulating authority (presumably NHTSA in the U.S.) and appropriate city/states if they agree to allow limited public road usage.


In theory, that sounds great, but in practice, why should we trust manufacturers to reliably test their own products?


I agree, but it's fairly common practice. I think a "trust, but verify" approach should be used, with jail for board members who attempt to fool the regulators (cf. the VW emissions fraud).


The problem is that the manufacturer of course has a very strong incentive to side with itself. People don't advocate against themselves, and companies are no different.

When J&J found out about potential asbestos contamination in their baby powder in the 70s, they managed to convince the FDA that they would research and handle it. It took until 2020 for it to come to light that they did not do that, and that, in fact, their baby powder was contaminated.

They ran multiple studies, and some of them even showed that the amount of asbestos in their product was dangerous. But those studies never saw the light of day, and the company acted in a self-preserving manner. It's a similar story with 3M and byproducts of Teflon.

But federal or state agencies have no allegiance to a company's bottom line. They don't have the same incentives to lie or downplay. So, I think, it only makes sense that they should be responsible for testing directly, not just supervising.

I also think we need to adopt some legislation so that we must test products before we release them. You may be shocked to know you're allowed to release new chemical products without proving their safety, and you can even release new food products without proving their safety. Most of these products end up being okay in the long run, but some we have to retroactively ban. It would be easier for everyone if we begin in a banned state.


I do agree with that. It's a complicated trade-off between spending resources to police companies vs doing all the testing.

Companies will usually be doing a lot of internal testing on new products, so they'll have a lot of the necessary tech and processes already in place. The trick is to ensure that faking results is penalised enough to make it not worth the risk. Most of the time it's cheaper to trust companies and then focus on the safety stats, though that fails with your examples of non-obvious issues.


It's NOT a complicated tradeoff at all.

The point is you either willingly spend the resources to independently test things for safety or companies WILL kill you to save a dollar.

This has been proven time and time again, big and small. We have the FDA entirely because a small company made "medicine" by buying medicine powder and putting it in a solvent that was acutely toxic that they didn't even think to give to an animal or something first. Literally just sold a random liquid to people as "medicine" and it killed a hundred people in utter agony.

Leaded gas was never a technical requirement. We could have been using Ethanol as our anti-knock/octane booster since the very beginning and never poison anyone but quite literally, the company chose instead to put Tetraethyl lead into gas because they could patent it, despite it being literally poison.

Same with PFOAs from 3M, who knew it was lethally toxic to mammals at fairly low doses decades and decades ago, but made no attempt to tell anyone or notify anyone or even reduce how much they were pouring it into waterways upstream of small communities. When they finally got sued by a lawyer who dug this all up in discovery, they finally said "Okay we will replace it" and replaced it with a nearly identical chemical that is just as toxic to mammals that they hope will break down easier in the environment, but how long will that take to demonstrate is false, and what are they going to switch to then?

Nixon made the EPA because companies would rather everyone die than change literally anything they've developed about their process or product, because they don't care about people dying. This idea of "bad press" or people will just stop using your products has demonstrably failed.

So suck it up, pay some taxes, test things for safety, and stop letting people die for such minuscule boosts in private company profits.


The difference in staffing between trust+verification and no-trust is miniscule, because any savings are things that are trusted and unverified.

Why not take an objective, fact-based regulatory approach from the start?


I've read somewhere that the NHTSA people working on mandatory tests for self driving were fired by DOGE.


Tesla has been lying for years. Why would you trust them to conduct their own tests? That's completely backwards.

Independent tests are what's needed, and preferably done by agencies who won't get gutted because they find something that's inconvenient.


For the same reason that nonprofit consumer advocacy and safety organizations like Consumer Reports and the Center for Auto Safety exist. Lobbyists and wealthy private individuals exert a lot of influence over the operations of publicly funded oversight agencies. They have the power to censor these agencies or as we've seen recently, fire all their staff, defund them or close them completely.

As for the fanbois and f*cbois, they have always existed and will always exist. They are the pawns. Smart mature people learn to lead, direct, manipulate, deflect and ignore them.


Or cigarette companies having doctors show that cigarettes don't cause cancer. It turns out that the bar for science is really, really high, and for stuff that's less obvious and takes time and lots of money and effort, it's really hard to show. You'd think I was a loon if I told you gravity wasn't a decided matter of physics, since we're all glued to this Earth every day, but we still don't know what dark matter is or what the ramifications for the theory of gravity are. Science still doesn't know how to test psychedelic medicine because it's obvious to the control group that they're on the placebo. That doesn't mean we should lower the bar for science, just that we should be aware of shortcomings in our beliefs.

So if you want to get into the details and figure out why a video from "Tesla Takedown" should or should not be believed, I'm all ears, but I'm some random on the Internet. I don't work at NHTSA or anywhere that could effect change based on the outcome of this video. It's not going to affect my buying decisions one way or another; it'll only divide people who have decided this matters to them and can't get along with others.


Which testing agencies are doing these tests?



That's crash testing, which is about the safety of the people inside the vehicle.

This test is about whether Tesla's self driving technology is safe for the people outside the vehicle.


>That's crash testing, which is about the safety of the people inside the vehicle.

Crash testing is much more than that. Check out NCAP for example. They specifically include safety ratings for other, vulnerable road users (i.e. pedestrians). And the Model 3 currently sits at the top spot of their 2025 rankings in that category.


That's still after the person outside the vehicle has been struck.

This test shows the self driving software disobeys traffic laws (the school bus stop sign requires everyone else to stop), resulting in a pedestrian collision that would not have happened had traffic laws been followed.


The same agencies that do crash testing generally do all manner of other tests. Rollover testing, brake distance testing, restraint systems, drive aids, car seat fitment, etc, etc.


> The tests were setup by anti-Tesla organizations

:)

Any good reason to believe these tests?


[flagged]


TBH, kids in a schoolbus are obviously going to public school.


Many of the private schools around here (eastern MA) run school buses that transport students to/from school.


[flagged]


No you can’t. If FSD ignores a stop sign on a school bus the messenger isn’t important.


Unless the stop sign was facing the other direction and so on. There are many details that somebody has to get right to create a realistic scenario.

If this test is real, it does not look good for Tesla.


Kids jumping in front of cars is realistic in itself. Throw in a school bus and the car should've stopped, no ifs.

And the Tesla didn't even perform an emergency brake.


If there’s a stop sign, the car should come to a full stop irrespective of the presence of children.

That’s not a hard scenario to get right: all you need is a school bus with a deployed stop sign. Which is the case in this test.


The issue is the context. A stop sign can be in a shop's window, as it was in that famous video where the Tesla came to a full stop at a shop that had a stop sign.


It looks real on the video that is included in the linked article.


And that's why reading further is important.

It’s either fake or a massive issue.


You should keep reading and check the data, not just the source. If the data is real, then the bias doesn't matter; what a stupid, brainless take...


Hopefully the dummies were from the board of directors


Isn’t FSD modeled after real drivers?

In that case it could’ve learnt almost nobody ever fully stops at a stop sign, or in this case a bus stopped with a stop sign.


Blowing past the stop signals on a school bus is one of the few traffic violations for which I see overwhelming support for strict enforcement.

People are way more accepting/understanding of drunk driving than passing school bus stop signals.



