I submitted this because there are a lot of good examples of how driving is not just a process of spotting objects and following lanes, but a social process of interacting with people.
I came across another example a couple of days ago. A guy was telling a story about how he was out walking his two bigger dogs in the evening. Across the street, a tiny dog started barking angrily. The tiny dog got away from its owner and started to cross the street, heading into traffic. So the guy with the big dogs stepped a bit into the street, waving to attract the attention of an oncoming driver who he thought might not see the small dog about to run under the wheels.
I have no idea what a robot car might make of that, but I certainly wouldn't want to be the one who has to write the sort of social processing that determines the guy is trying to signal something, so we should stop until we figure out what the signal means.
You get a robodog that is replaceable. If your dog gets totalled you just download it into a new body at home. You're also going to probably need robodog insurance since restoring the robodog ain't cheap /s
> so we should stop until we figure out what the signal means.
We should determine how to handle driverless cars based on their overall safety, not edge cases. If driverless cars have significantly fewer injuries/accidents/etc. per mile driven than human drivers, we should push to expand them. If they only drive better in certain areas, we should geofence them.
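To make that heuristic concrete, here's a rough sketch with made-up per-area incident rates (the function and every number in it are hypothetical, not real data):

```python
# Hypothetical sketch of the expand/geofence rule above; all rates are made up.
# Rates are incidents per million miles driven.

def deployment_policy(av_rates, human_rates):
    """Expand where AVs beat human drivers in an area, otherwise restrict them."""
    return {
        area: ("expand" if av_rates[area] < human_rates[area] else "restrict")
        for area in av_rates
    }

# If AVs only win in some areas, the result is effectively a geofence.
print(deployment_policy(
    {"downtown": 1.2, "highway": 4.0},   # hypothetical AV incident rates
    {"downtown": 2.5, "highway": 3.1},   # hypothetical human incident rates
))  # -> {'downtown': 'expand', 'highway': 'restrict'}
```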
If they're safer than human drivers but would hit the guy with the dogs in this case, we should still allow them. The opportunity cost of the other accidents that would've occurred if self-driving cars weren't allowed very much matters. Plus the guy with the dogs shouldn't be stepping out into the street (and the guy with the small dog should be able to maintain control of a small dog).
Also, in a world of self-driving cars, the guy doesn't have to try to get somebody's attention out of fear they won't see the small dog, because self-driving cars don't have the limited visibility issues that a human does.
Safety is all edge cases. That's especially true of cars; we all know that 99% of the time we can drive with our minds on other things. Our whole transportation infrastructure is engineered toward that end. We can't just handwave away the edge cases, because that's where all the harm happens.
Your heuristic for allowing them where they're demonstrably safer is not a bad one, but too limited. Even if they become equivalent in deaths-per-mile terms (something not demonstrated, and something not currently demonstrable given how opaque and secretive these companies are), other factors matter.
And your fantasy of a world of self-driving cars is a little too fantastical for me. Are current self-driving cars better able to avoid hitting a small dog that runs out into the street? It's possible, but I would doubt it. Will they get there and remain there? Again, it's possible. But it's also plausible that some manager's spreadsheet will decide that the lawsuits from owners of dead small dogs will be cheaper than better sensors for a billion self-driving cars.
> Your heuristic for allowing them where they're demonstrably safer is not a bad one, but too limited. Even if they become equivalent in deaths-per-mile terms (something not demonstrated, and something not currently demonstrable given how opaque and secretive these companies are), other factors matter.
What other factors? I think you're wrong about this - at the high level, what matters is whether these cars save lives/prevent injuries overall. If you're saying that's not the only thing that matters, what is more important than that?
> Are current self-driving cars better able to avoid hitting a small dog that runs out into the street? It's possible, but I would doubt it.
The way cars are engineered, humans generally can't see a small dog directly in front of the car. Tough to be worse than that.
Remember that most of the complaints about self-driving cars are that they just stop when they don't know what to do. If them stopping too much is the problematic behavior, it's weird to me that you and other folks here are using examples where they might run someone over. It just seems like an attempt to find theoretical fault, which is easy to do.
You named one, injuries. Can you really not think of more?
> humans generally can't see a small dog directly in front of the car
Sure, but they have extensive hardware and data for social cognition. So they could well see a person with a leash running after an out-of-sight dog and infer the dog. They could see a person waving at them to stop because of a dog. They could hear the dog barking. They could have seen the dog 30 seconds ago and said, "Hey, where did that dog get to?" They could see the dogs in the park and slow down to keep a closer eye out. They could do all manner of things that are well beyond the reach of modern technology.
And I'll note you conveniently skipped over the case where car companies just skipped over worrying about the dog at all because it allowed them to "increase shareholder value".
> You named one, injuries. Can you really not think of more?
No, I can't - I said injuries and deaths. Those are the factors I think we should use to determine if these cars are legal. You said we should consider other factors; then, when asked, you cited one of my factors instead and asked me to give you examples of other things. If you think there are other factors to be taken into account, say what they are.
> And I'll note you conveniently skipped over the case where car companies just skipped over worrying about the dog at all because it allowed them to "increase shareholder value".
What are you talking about? The main thrust of complaints about self-driving cars is that they block traffic because they stop when they're not sure what to do. It's literally the opposite of the problem you're describing. You're just making things up and then saying complete nonsense like this.
We should decide whether self-driving cars are legal based on whether or not they make the world safer. That's my stance. Yours seems to be unserious, incoherent nonsense.
to me, an argument i haven't heard yet is human dignity.
it's tough to hold onto it (as a human) when you're being threatened and/or injured/maimed/killed by robots.
the idea of yelling at a robot and having it ignore you is pretty fucked up, especially if it's squeezing your coworker to death, or running a dog (or human) over, etc.
But they don't squeeze people to death or run over dogs (from a distance - the physics of cars unfortunately mean that some crashes are unavoidable by either human or machine).
Both Cruise and Waymo have an exceptional ability to detect collisions and brake immediately, to the point where regular users tugging on a door handle unexpectedly can immobilize a car.
"Human dignity" suffers much more when a human kills a pedestrian through sheer carelessness, or a child dies in an entirely preventable crash.
> it's tough to hold onto it (as a human) when you're being threatened and/or injured/maimed/killed by robots.
It's tough to hold onto human dignity if you're dead or maimed because a car hit you. If we can reduce the number of people who are dead or maimed by cars by introducing self-driving ones, why would we not do that? Why is it better to live in a world where more people are dead or maimed and humans are driving than one in which fewer people are dead or maimed and cars are self-driving?
> the idea of yelling at a robot and having it ignore you is pretty fucked up
So what about the inattentive driver looking at their phone? You yell at them, they don't hear you because they're in a car and it's loud, and they hit you and severely injure you. Is that not fucked up?
We're not, so let's say more than that, but it certainly shouldn't be 10x. If you have the ability to replace a car that kills 2 people per 100,000 miles driven (not a real number, just an example) with a car that kills 1 person per 100,000 miles driven, and you don't do that, you're needlessly causing a bunch of people to die.
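With the made-up numbers above, the arithmetic of that opportunity cost looks something like this:

```python
# Worked example using the made-up rates above: 2 vs. 1 deaths per 100,000 miles.
human_rate = 2 / 100_000   # deaths per mile with human drivers (hypothetical)
av_rate    = 1 / 100_000   # deaths per mile with self-driving cars (hypothetical)

miles = 10_000_000         # pick any amount of driving
expected_deaths_human = human_rate * miles   # 200
expected_deaths_av    = av_rate * miles      # 100

# Refusing to switch forgoes roughly this many avoided deaths:
print(expected_deaths_human - expected_deaths_av)   # 100.0
```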
Color me skeptical of a San Francisco bureaucracy pleading for more time to process approvals against the backdrop of a plebiscite. It's fair to say we need to speak to San Francisco's police and firefighters to get them the data they need to craft cooperation agreements. (For example, an emergency navigation override.)
But absent even a single policy proposal or timeline, and considering the Taxi Workers Alliance board member's involvement, it's difficult to craft a compelling case for why safety is actually--versus theoretically--impeded.
Sorry, what's your model for how safety isn't impacted by firefighters and police having to deal with the sort of incidents described but occurring much more frequently?
> what's your model for how safety isn't impacted by firefighters and police having to deal with the sort of incidents described but occurring much more frequently?
I never said as much. We're extrapolating potential safety risks from anecdotes of interactions. I'm just saying maybe that extrapolation is off.
You said "it's difficult to craft a compelling case for why safety is actually--versus theoretically--impeded". That apparently means you believe there is a world where these incidents happening 10 more frequently [1] will not impede safety. I would like to know what that world is.
But if you can't make a case for a world where there's zero safety impact, then it seems like it would be pretty easy to make a case for a world where there's some safety impact.
Personally, I'd say that a bunch of people whose whole job is public safety saying that public safety is impeded is a decent case that public safety is impeded. But since you apparently believe otherwise, I'm looking for your explanation of how there's zero safety impact.
> means you believe there is a world where these incidents happening 10x more frequently will not impede safety
We don’t know how these interactions scale with deployment and learning. 10x the cars does not necessitate 10x the interactions. And while tying up public workers’ time is a nuisance that can cause harm, that doesn’t mean it will. These are real-world tests: we should have real examples.
> a bunch of people whose whole job is public safety saying that public safety is impeded is a decent case
I guess I’m deeply sceptical of collections of anecdotes on the eve of a vote.
Will this be a challenge for San Francisco’s police and firefighters? Of course. Does that give them a veto? No, particularly not without a game plan other than standing by.
So your model is that 100% of the incidents that can cause harm will not cause harm? No matter how many cars they put on the road? Again, I'd ask you to back that up.
Because otherwise it sounds to me that because of your particular political convictions, you're just handwaving away 100% of the evidence here.
If you get to be skeptical of all of the evidence that's collected because there's an upcoming relevant decision, I get to be at least as skeptical of how your skepticism is applied.
> you're just handwaving away 100% of the evidence here
The evidence is a collection of inconvenient interactions. No actual harm. Contrast that with Tesla: actual harm. In the absence of evidence of actual harm, I don’t see an argument for limiting scope.
Hence actual, measured harm versus theoretical harm one would assume would manifest from the frequency of interactions.
Again, I want to see you explain why those "inconvenient interactions" will never add up to a safety problem, because all I'm seeing is handwaving. In my view, "inconveniencing" a firefighter in the middle of fighting a fire is a clear safety issue. Your notion that you have to see blood before taking it seriously is pretty wild to me. In college I knew a guy who regularly drove drunk using pretty similar logic, because he'd never had a problem.
“Never add up” is a straw man. If your drunk-driving guy from college were instead hundreds of people driving drunk every day for years on end, then yes, I’d validly (if sceptically) question my priors if there hadn’t been any injuries or serious property damage. Here, my priors are much softer than those for drunk driving. So when all we have is anecdotes and vague interactions, I lean towards the lack of evidence of actual harm.
I'd expect our public safety people to arrive with real numbers about human vs AV obstructions. SF's bias against change is pretty well-understood at this point, and I'm not at all convinced that this is a good reason to block expansion.
Not that SF has taken this stance, but I think the stance of share your data with us or else we won't approve would be fine by me.
> Also upsetting to firefighters was “zero transparency” of data from self-driving car companies. Because Waymo and Cruise do not disclose internal counts of unexpected stops or other incidents that impede first responders, the fire department is forced to depend on information from members of the public, city employees, firefighters or transit operators, which oftentimes resulted in incomplete or duplicate reports.
Although later in the article they do provide some numbers.
> She said the 55 — and counting — examples that the fire department has cited of driverless cars’ interference with law enforcement or first responder operations “demonstrated that these vehicles themselves are not understanding human traffic control.”
> Just one week ago, that number was 50. “We think that the companies are ready to move forward with broad expansion when that number has gone down and does not continue to go up,” said Friedlander.
One, they've got some data. Two, why is it their job to prove the problem rather than the job of the people proposing the change to demonstrate safety? Three, why is it San Francisco's job to be a proving ground for this?
I think there's pretty good theoretical reason to think automated cars are, absent AGI, fundamentally unable to be good traffic participants: traffic is social. And I'm hardly alone here; robot scientist and iRobot founder Rodney Brooks is very skeptical of the current self-driving efforts. His predictions have turned out to be much more accurate than the boosters'. And that's before we even get to the question of other impacts.
So I think the burden of proof here should not be on the city. It should be firmly on the people making self-driving cars. Why aren't they arriving with real numbers?
> Why should San Francisco be at the bleeding edge and what will these companies give it in return?
Other than lots of high-paying jobs, tax revenues, and transit options? Other than the pedestrian that dies basically every week to cars, or the severe injury every 14 hours due to cars?
San Francisco is owed nothing. It's a city government; it really should not be in the business of twisting companies' arms to extract concessions for imagined slights.
It can pass fair, objective laws in good faith. It has historically been horrible at doing that.
San Francisco is a government that represents the people who live there. If the people of San Francisco want no robot cars until they're reasonably safe, that's a call they should get to make. If that means the loss of the things you mention, all of which I doubt would change much, well, that's a choice they can make. And as an actual San Francisco resident, I personally favor cutting back on the number of robot cars until they're demonstrably much safer.
> If the people of San Francisco want no robot cars until they're reasonably safe, that's a call they should get to make.
Should they also get to issue drivers licenses and car registrations? Decide who gets to live there and who gets forced out? Issue my passport (I'm a citizen of SF)? Where does it stop, in your mind?
San Francisco is a city that oversteps its bounds as frequently as it gets away with. It does not get to make the determination if a car is safe or not; that power resides solely with the State of California and has done so for around a century. Thank god for that, or else the types of NIMBY homeowners who live here would try to ban anyone not on their street from driving there.
> And as an actual San Francisco resident, I personally favor cutting back on the number of robot cars until they're demonstrably much safer.
And as an actual San Francisco resident, SF drivers are insane sociopaths and I would gladly trust my life to a robotaxi far more than I'd trust it to my fellow city residents.
I think people hate these cars because they drive the speed limit and stop at stop signs, with a healthy dose of "clearly big tech is responsible for all our problems."
These cars are already demonstrably safer than a human driver. If the city cares about preventing fires, they'd go after what's almost definitely the #1 cause of fires - homeless encampments. If they cared about road safety, they'd go after the #1 cause of traffic incidents - human drivers who virtually never get ticketed.
About every two months I think "oh, maybe I'll ride my bike to the office." Every two months, I nearly fucking die to some entitled driver who swerves into the bike lane to skip traffic, park, or just because they're on their phone.
Frankly, you are an extraordinarily privileged SF resident if you can pretend these cars are not an improvement. I am pleased that the city has no power to ban these cars from the road; the people pushing to ban them are the same set of people I've mentioned above.
> So I think the burden of proof here should not be on the city. It should be firmly on the people making self-driving cars. Why aren't they arriving with real numbers?
The city has a demonstrated track record of acting in bad faith when it comes to tech companies and employees. I do not fault these companies for not voluntarily sharing data, because it's gonna be misinterpreted and taken out of context as much as possible.
> One, they've got some data.
They have data that's been taken out of context (i.e., how many times do these types of incidents happen when humans are driving vs. how often they happen with SDCs). SFMTA has also previously willfully misinterpreted data to pretend that these AVs are more dangerous than they really are:
> Two, why is it their job to prove the problem rather than the job of the people proposing the change to demonstrate safety?
Because Cruise, Waymo, et al. are demonstrating safety using an objectively good metric: collisions, injuries, and deaths. It's the city's job to measure secondary safety characteristics and ask companies to optimize for them, or to just ticket cars that block fire stations.
> Three, why is it San Francisco's job to be a proving ground for this?
Because no city has the right to block immigration of workers or companies moving in, and the city of San Francisco does not and should not have authority that supersedes the DMV or CPUC to regulate the basic operation of vehicles in the city. If the city wants to ticket companies that are blocking fire stations, it should be more than capable of legislating that on its own.
> I think there's pretty good theoretical reason to think automated cars are, absent AGI, fundamentally unable to be good traffic participants: traffic is social. And I'm hardly alone here; robot scientist and iRobot founder Rodney Brooks is very skeptical of the current self-driving efforts. His predictions have turned out to be much more accurate than the boosters'. And that's before we even get to the question of other impacts.
To be blunt, I think you've misread this. Level 5 self-driving is irrelevant - Level 4 is what Cruise and Waymo are currently targeting, and they have no plans to move to Level 5 now or before they become profitable. I can anecdotally tell you that as a pedestrian or cyclist, I feel _much_ safer around these cars because I know they see me without needing to look. There's much less need for social interactions when cars have close to perfect knowledge of their surroundings.
That post also *rants* about exactly this type of "safe enough" discourse: human drivers are so wildly unsafe that the instant a company can provide better top-level safety metrics it would be immoral to restrict them.
You cannot ride in one of these cars and then say "automated cars are fundamentally unable to be good traffic participants." They're frankly already better traffic participants than most human drivers in SF, who act like utter sociopaths.
> So I think the burden of proof here should not be on the city. It should be firmly on the people making self-driving cars. Why aren't they arriving with real numbers?
They do arrive with real numbers. Like I mentioned: collisions, injuries, and deaths. CPUC has also requested a lot of this data during this hearing, which AV companies have agreed to provide to the state but not the city - you might be interested in the resolutions proposed by Cruise and Waymo, and the CPUC requests for information:
You probably don’t live in SF and have never seen the incidents on the news where the police and fire agencies have tried to move a driverless car in an emergency.
Right? Like a firefighter trying to put out a burning car should have to get into some "your call is important to us" phone queue just to get the robot car off the firehose.
They have emergency override controls and trained procedures around it for elevators; I don’t see much of a leap for autonomous cars, assuming the override process isn’t itself ridiculous.
Giving firefighters a way to completely take control for 10 miles or something, with a relatively low barrier to entry, seems like a perfect way to eliminate vast numbers of weird scenarios.
The law is that vehicles need to give right of way to emergency vehicles. If autonomous cars repeatedly demonstrate that they cannot do that, then take their license away, just like you would for a natural person. Why do legal persons get to externalize the difficulty of their autonomous driving problem onto the police and fire departments (and the affected citizenry)?
Sure, but you don’t need to hand over full capability for it to be sufficient. That’s why I noted an arbitrary limit on mileage once the procedure is engaged — you can imagine all sorts of other arbitrary limitations; e.g. once in 24 hrs, limited featureset, auto-enabled tracking, flags to server that it’s been engaged, etc.
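As a sketch of what those limits might look like (every field name and number here is something I'm inventing for illustration, not anything an AV company actually exposes):

```python
# Hypothetical sketch of a constrained emergency-override policy.
# Field names and limits are invented for illustration only.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

MAX_OVERRIDE_MILES = 10.0                 # hard cap on distance under manual control
OVERRIDE_COOLDOWN = timedelta(hours=24)   # at most one override per vehicle per day

@dataclass
class OverrideSession:
    started_at: datetime
    miles_used: float = 0.0

def may_continue(session: OverrideSession, last_override_end: Optional[datetime]) -> bool:
    """Allow the override only within the mileage cap and the daily cooldown."""
    if session.miles_used >= MAX_OVERRIDE_MILES:
        return False
    if last_override_end and session.started_at - last_override_end < OVERRIDE_COOLDOWN:
        return False
    # In this sketch, engaging the override would also flag the event to the
    # operator's servers and enable tracking, so every use leaves an audit trail.
    return True
```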
The potential for abuse of a car driving around with nobody in it is massive. The thing is you can already DoS them with a knife. Just poke a little hole in the tires and that thing isn't going anywhere. You don't even need to book a ride or even get in it.
Indeed, yet the other replies seem to be missing the insanity: any mechanism that permits this sort of override has another name: “backdoor”. If this capability exists widely enough for all emergency personnel to use it, then it would only be a matter of time until the mechanism is cracked wide open and abused.
"He’s still disgruntled by an incident last weekend in the Richmond District, during which an autonomous vehicle entered the scene and parked between a fire engine and a vehicle on fire."
If we introduced regular cars for the first time in this political environment, we'd never allow them on the roads. Our existing cars on the road severely impede emergency vehicles, through regular stand-still traffic, double-parking without anyone in the car, parking in front of hydrants, etc.
We have learned to ignore and work around those issues, but it's hard to do the same with novel issues that we give people the power to block and delay.
I think they should just put those expensive-ass push bars to work that they have on their vehicles. It would take all of 2-3 minutes to shove the misbehaving car out of the way. And if it gets damaged, so what? Maybe Cruise having to come out and repair totalled cars every time they block an emergency scene might get them to do something about it. I mean, 55 wrecked cars would put a nice dent in their fleet!
It’s probably already in the law that the police/fire can move an abandoned vehicle, and a driverless car with no driver is obviously abandoned, soooooo…
Yeah I mean I thought it was pretty common for fire fighters to smash windows and tow cars that are blocking hydrants. I don't see this as any different. Like is a $20,000 car that's part of a giant fleet worth saving when someone's life or health is at risk?
Tesla doesn't have this problem. I think the approach of progressively autonomous non-geofenced driver assist software is better than geofenced driverless.
Tesla has a fleet of millions of cars, collecting 1m+ miles per day of real-world data. When Cruise encounters a novel situation, it halts traffic. When a Tesla running FSD does, the human takes over, it gets reported as an edge case, edge cases are labeled and bucketed by frequency and tackled in that order, failing unit tests are written, the entire fleet is queried for data on similar situations, the models are trained, the edge case is fixed, and an update is pushed to the entire fleet over the air.
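Roughly, the triage step of that loop looks something like this (a toy illustration on my part, not Tesla's actual code or API):

```python
# Toy illustration of the triage step: bucket reported edge cases by scenario
# and work on the most frequent ones first. Nothing here is Tesla's real code.
from collections import Counter

reported_edge_cases = [
    # (scenario_label, clip_id) pairs as they might come in from the fleet
    ("unprotected_left_turn", "clip_001"),
    ("construction_zone",     "clip_002"),
    ("unprotected_left_turn", "clip_003"),
    ("double_parked_truck",   "clip_004"),
    ("unprotected_left_turn", "clip_005"),
]

buckets = Counter(label for label, _ in reported_edge_cases)
work_queue = [label for label, _ in buckets.most_common()]

print(work_queue)
# -> ['unprotected_left_turn', 'construction_zone', 'double_parked_truck']
# For each bucket: write failing regression tests, query the fleet for similar
# clips, retrain, verify the tests pass, and push the update over the air.
```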
Not this particular problem, no. Tesla has other problems, like trying to drive into pedestrians, or semi-truck trailers, or fire engines...
But at least Teslas don't slow down traffic!
In seriousness, though, it would appear that Tesla's handling of edge cases isn't quite as neat and clean as your comment implies. That might be the spec, but the implementation appears to this outside observer to be a fair bit different.
You're basically describing one way AV companies already deal with issues. I'll describe a typical pipeline based on broad public information:
When an autonomous vehicle encounters a situation, it will initially attempt to resolve it automatically. Sometimes that fails, so it will fall back to asking a human for guidance. Depending on the company and the vehicle there may or may not be a local safety driver available as a secondary backup.
In either case, that situation will be flagged. The amount of detail varies depending on whether there's a human taking notes. One major difference from Tesla here is that all of the high-resolution cameras, the LIDARs, the radars, etc. will be recorded and made available, alongside the full logging data for the ride. Depending on the car and the software it's running, stack tracing data may also be available.
Some of the consumers of this data collect these incidents into buckets to identify issues. This is used to determine things like rollout success (any particular company will have multiple "fleets" running the equivalent of canary/beta/stable) and for finding edge cases. Once one is identified (say through a news story), they'll search for similar events. From there, they can generate simulated test cases based on real incidents or do things more manually. There are multiple kinds of test cases as well. All subsequent updates will run against those test cases. Updates will be deployed to the vehicles when they're not on road. The exact frequency varies, but it's typically higher than Tesla's update cadence.
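As a toy sketch, the fallback-and-flag flow at the start of that pipeline might look like this (the structure and names are my own illustration, not any company's actual stack):

```python
# Toy sketch of the fallback chain described above: resolve on-board if possible,
# then ask remote guidance, then a local safety driver if one exists.
# All names and structure here are illustrative only.

def handle_situation(situation, remote_ops_resolves, safety_driver_present=False):
    """Return which layer handled the situation; every escalation gets flagged."""
    flagged = []  # stand-in for the full sensor/log recording described above
    if situation.get("resolvable_onboard"):
        return "onboard", flagged
    flagged.append(("remote_guidance_requested", situation["id"]))
    if remote_ops_resolves(situation):
        return "remote_guidance", flagged
    if safety_driver_present:
        flagged.append(("safety_driver_handoff", situation["id"]))
        return "safety_driver", flagged
    flagged.append(("stopped_and_waiting", situation["id"]))
    return "stopped", flagged

# Example: the car can't resolve it on-board and remote ops also declines.
print(handle_situation({"id": "evt-42", "resolvable_onboard": False},
                       remote_ops_resolves=lambda s: False))
# -> ('stopped', [('remote_guidance_requested', 'evt-42'), ('stopped_and_waiting', 'evt-42')])
```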
If I take it as axiomatic that Tesla's approach carries a higher level of mortal risk, and I boil this down in a reductive way, I interpret you as saying: it's better to increase the loss of life and limb than slow auto traffic.