One of the thorniest problems facing driverless cars has nothing to do with engineering, bureaucracy, or personal anxiety about the technology. Instead, it’s an ethical dilemma, a moral quandary: robots will be placed in a position to make decisions about human life.
Up until recently, this has been the stuff of science fiction, fodder for Isaac Asimov novels. But driverless cars will force questions that feel more at home in a philosophy classroom into the real world and our everyday lives. And we’ll need to make a decision as a society soon! Autonomous vehicles are navigating our streets today. In the absence of government policy, these ethical and moral questions will be answered by engineers, computer programmers, and corporate executives. I trust their technical expertise, but the idea that they will be establishing ethical and moral norms is terrifying.
Here’s the problem. Imagine a driverless car traveling down a street when it suddenly detects five people crossing the road in front of it. The vehicle has two options: it can continue forward and kill the five people crossing the road, or it can swerve into a building and kill its own passenger. This dilemma was originally posed with a trolley, which is why it is called the trolley problem.
Most people take the utilitarian perspective advocated by the philosopher Jeremy Bentham: the car should minimize harm and choose to kill the one passenger instead of the five pedestrians.
That is all perfectly logical. But when these same people are asked whether they would buy that car, they often say they would not. They would only buy a car that saves them at all costs, even if five people are killed in the process.
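To make the tension concrete, here is a minimal, purely illustrative sketch of the two competing policies. The scenario fields and policy names are hypothetical inventions for this post, not anything drawn from a real autonomous-vehicle system:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    # Toy scenario: the human cost of each of the car's two options.
    pedestrians_killed_if_continue: int  # harm if the car stays its course
    passengers_killed_if_swerve: int     # harm if the car swerves into the building

def utilitarian_policy(s: Scenario) -> str:
    """Minimize total deaths, regardless of whose they are (the Bentham-style answer)."""
    if s.passengers_killed_if_swerve < s.pedestrians_killed_if_continue:
        return "swerve"
    return "continue"

def self_protective_policy(s: Scenario) -> str:
    """Protect the passenger at all costs -- the car people say they would actually buy."""
    return "continue"

trolley = Scenario(pedestrians_killed_if_continue=5, passengers_killed_if_swerve=1)
print(utilitarian_policy(trolley))      # -> "swerve"   (kill 1 instead of 5)
print(self_protective_policy(trolley))  # -> "continue" (the passenger is never sacrificed)
```

The point of the sketch is that the difference between the two behaviors is a handful of lines of code, and today someone at a car company gets to decide which of them ships.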
This brings us to the crux of the problem. If people won’t buy a car that minimizes harm, then either carmakers will respond by building cars that protect the passenger at all costs (absent policy to the contrary), or we’ll continue with the status quo, which is far worse than either scenario.
Fortunately, there are some very smart people exploring this issue and engaging the public on it. Iyad Rahwan, an associate professor at the MIT Media Lab, has an excellent TED Talk on the subject. His students have also created a website called the Moral Machine to collect feedback from the public on how we should navigate the moral challenges posed by driverless cars. There is also a superb podcast from the perennially amazing team at RadioLab called the Driverless Dilemma.
Unfortunately, the Federal Government removed “ethical consideration” from its guidance for autonomous vehicles. I can’t say that I’m surprised that the current administration would omit ethics as a consideration in the development of this technology (or any issue), but I am still disappointed. That said, I do think that work on these issues will continue in the academic arena and internationally, even if the US has ceded leadership on the question.