As a counter-argument, proponents of driverless technology only need to point to the data. The US Department of Transportation report on the aforementioned Tesla accident observed that activation of Tesla's Autopilot software had led to a 40% decrease in crashes involving airbag deployment. Tesla's Elon Musk regularly tweets links to articles that reinforce this message, such as one stating that "Insurance premiums expected to decline by 80% due to driverless cars".
Why on earth would we not embrace this technology? Surely it is a no-brainer?
The counter-argument is that driverless cars will probably themselves cause accidents (possibly very infrequently) that would not have occurred without driverless technology. I have tried to summarise this argument previously: the enormous complexity of these systems, and their heavy reliance upon Machine Learning, could make these cars prone to unexpected behaviour (cf. articles on driverless cars running red lights and causing havoc near bicycle lanes in San Francisco).
If driverless cars can in and of themselves pose a risk to their passengers, pedestrians and cyclists (and this seems apparent), then an interesting dilemma emerges. On the one hand, driverless cars might lead to a net reduction in accidents. On the other hand, they might cause some accidents that would not have occurred under the control of a human. If both are true, then the argument for driverless cars is in essence a utilitarian one: they will benefit the majority, and the harm they cause to a minority is treated as an acceptable cost.
At this point, we step from a technical discussion to a philosophical one. I don't think that the advent of this new technology has been adequately discussed at that level.
Should we accept a technology that, though it brings net benefits, can also cause accidents in its own right? This is anything but a no-brainer, in my opinion.