Safety is all edge cases. That's especially true of cars; we all know that 99% of the time we can drive with our minds on other things. Our whole transportation infrastructure is engineered toward that end. We can't just handwave away the edge cases, because that's where all the harm happens.
Your heuristic for allowing them where they're demonstrably safer is not a bad one, but too limited. Even if they become equivalent in deaths-per-mile terms (something not demonstrated, and something not currently demonstrable given how opaque and secretive these companies are), other factors matter.
And your fantasy of a world of self-driving cars is a little too fantastical for me. Are current self-driving cars better able to avoid hitting a small dog that runs out into the street? It's possible, but I would doubt it. Will they get there and stay there? Again, it's possible. But it's also plausible that some manager's spreadsheet will decide that the lawsuits from owners of dead small dogs will be cheaper than better sensors for a billion self-driving cars.
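To make the spreadsheet worry concrete, here's a back-of-envelope sketch. Every figure in it except the billion-car fleet is a made-up assumption for illustration, not a real industry number:

```python
# Back-of-envelope sketch of the incentive. All figures below are
# invented assumptions for illustration, not real industry numbers.
FLEET_SIZE = 1_000_000_000          # the hypothetical billion-car fleet
SENSOR_UPGRADE_COST = 100           # assumed extra dollars per car for better sensors
DOG_INCIDENTS_PER_CAR_YEAR = 1e-5   # assumed incident rate without the upgrade
PAYOUT_PER_INCIDENT = 10_000        # assumed average settlement, in dollars
CAR_LIFETIME_YEARS = 10

upgrade_cost = FLEET_SIZE * SENSOR_UPGRADE_COST
lawsuit_cost = (FLEET_SIZE * DOG_INCIDENTS_PER_CAR_YEAR
                * CAR_LIFETIME_YEARS * PAYOUT_PER_INCIDENT)

print(f"Upgrade the fleet: ${upgrade_cost:,.0f}")  # $100,000,000,000
print(f"Pay the lawsuits:  ${lawsuit_cost:,.0f}")  # $1,000,000,000
# Under these made-up numbers, paying out is 100x cheaper than upgrading,
# which is exactly the call the spreadsheet would make.
```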
> Your heuristic for allowing them where they're demonstrably safer is not a bad one, but too limited. Even if they become equivalent in deaths-per-mile terms (something not demonstrated, and something not currently demonstrable given how opaque and secretive these companies are), other factors matter.
What other factors? I think you're wrong about this - at the high level, what matters is whether these cars save lives and prevent injuries overall. If you're saying that's not the only thing that matters, what is more important than that?
> Are current self-driving cars better able to avoid hitting a small dog that runs out into the street? It's possible, but I would doubt it.
The way cars are engineered, humans generally can't see a small dog directly in front of the car. Tough to be worse than that.
Remember that most of the complaints about self-driving cars are that they just stop when they don't know what to do. If the problem is that they stop too much, it's weird to me that you and other folks here are using examples where they might run someone over. It just seems like an attempt to find theoretical fault, which is easy to do.
You named one, injuries. Can you really not think of more?
> humans generally can't see a small dog directly in front of the car
Sure, but humans have extensive hardware and data for social cognition. So they could well see a person with a leash running after an out-of-sight dog and infer the dog. They could see a person waving at them to stop because of a dog. They could hear the dog barking. They could have seen the dog 30 seconds ago and thought, "Hey, where did that dog get to?" They could see the dogs in the park and slow down to keep a closer eye out. They could do all manner of things that are well beyond the reach of modern technology.
And I'll note you conveniently skipped over the case where car companies never bothered worrying about the dog at all because it allowed them to "increase shareholder value".
> You named one, injuries. Can you really not think of more?
No, I can't - I said injuries and deaths. Those are the factors I think we should use to decide whether these cars are legal. You said we should consider other factors; when asked, you cited one of my factors instead, then asked me to give you examples of other things. If you think there are other factors to take into account, say what they are.
> And I'll note you conveniently skipped over the case where car companies never bothered worrying about the dog at all because it allowed them to "increase shareholder value".
What are you talking about? The main thrust of complaints about self-driving cars is that they block traffic because they stop when they're not sure what to do. It's literally the opposite of the problem you're describing. You're just making things up and then saying complete nonsense like this.
We should decide whether self-driving cars are legal based on whether or not they make the world safer. That's my stance. Yours seems to be unserious, incoherent nonsense.