These kinds of articles are useless without base rates—how often does the median human driver crash per thousand or million miles? How about Waymo? So far, Waymo's safety data looks amazing: https://arstechnica.com/cars/2023/12/human-drivers-crash-a-l...
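The base-rate comparison being asked for is just a normalization to a common denominator of miles driven. A minimal sketch of that arithmetic, using made-up placeholder numbers (not the linked article's figures):

```python
def crashes_per_million_miles(crashes: int, miles: int) -> float:
    """Normalize a raw crash count to a rate per million miles driven."""
    return crashes / (miles / 1_000_000)

# Hypothetical counts purely to illustrate the comparison:
human = crashes_per_million_miles(crashes=400, miles=100_000_000)
waymo = crashes_per_million_miles(crashes=20, miles=10_000_000)

print(f"human: {human:.1f} crashes per million miles")  # 4.0
print(f"waymo: {waymo:.1f} crashes per million miles")  # 2.0
```

Without putting both populations on a per-mile basis like this, raw crash counts say nothing about relative safety.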
My fundamental problem with these studies is that they don't separate out reckless drivers (speeding, drunk, etc.). This matters because widespread (but not universal) adoption of driverless vehicles might not actually address the underlying problem. Instead of forcing people into driverless cars, the problem might be solved more effectively by forcing auto manufacturers to adopt GPS-based speed limiting.
And I am not at all convinced that Waymo is safer than a responsible driver who obeys the speed limit, so mandating driverless cars could very well be more dangerous than limiting the speed of human drivers. The worst-case scenario is responsible drivers switching to self-driving because the data told them it was safer (even if it isn't), while irresponsible drivers keep driving manually so they can still speed and run red lights.
The other, more minor, problem is that Waymos are relatively new vehicles in good condition, while the human crash rates include a number of crashes caused by mechanical failures that driverless fleets haven't accumulated yet. My most cognitively demanding driving experience was a tire blowout on the interstate... it's hard to accumulate 60,000 instances of training data like that for the AI to learn from.
> And I am not at all convinced that Waymo is safer than a responsible driver who obeys the speed limit, so forcing driverless cars could very well be more dangerous than limiting the speed of human drivers.
I think the value prop is that an AI driver will not get drunk or tired, not that a SOTA AI and an alert, good human driver perform about the same. A good human driver can be distracted, tired, under the influence, or emotional, all of which degrade their performance.