Should your robot car kill you?

Friday, June 24, 2016
by Cameron Shelley

The arrival of self-driving cars poses many challenges at the intersection of society and technology.  For example, since self-driving cars possess no deep-seated instinct for self-preservation, they could be programmed to react in almost any way in the event of a collision.

Discussions of such collisions often revolve around versions of the so-called "trolley problem", in which a person is forced to choose which of a small number of fatal outcomes they should prefer.  Would you push one person into the path of a streetcar, thus sacrificing that person, in order to save the lives of several people further down the track?

In the case of a self-driving car, imagine a scenario in which the car is unable to brake in time to avoid a group of pedestrians on the road.  Should it simply hit and kill them, thus saving the car's occupants?  Or should it swerve into a wall, killing its occupants and thus saving the pedestrians?

Perhaps the car should do whatever causes the least injury.  Or perhaps it should place special weight on the well-being of the most vulnerable parties, which I assume would be the pedestrians.
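
To make the idea concrete, here is a minimal sketch, in Python, of what a "least injury" rule with vulnerability weighting might look like.  Everything in it (the weights, the risk figures, and the function names) is hypothetical, chosen for illustration rather than drawn from any actual vehicle's software.

    # A toy harm-minimization rule.  All weights and risk figures are
    # hypothetical, for illustration only; no real vehicle works this way.
    VULNERABILITY = {"pedestrian": 2.0, "occupant": 1.0}

    def expected_harm(outcome):
        """Sum each affected party's injury risk, scaled by vulnerability."""
        return sum(risk * VULNERABILITY[party]
                   for party, risk in outcome["injury_risks"])

    def choose_maneuver(outcomes):
        """Pick the candidate maneuver with the lowest weighted harm."""
        return min(outcomes, key=expected_harm)

    # The scenario above: hit the pedestrians, or swerve into the wall.
    candidates = [
        {"name": "continue straight",
         "injury_risks": [("pedestrian", 0.9), ("pedestrian", 0.9)]},
        {"name": "swerve into wall",
         "injury_risks": [("occupant", 0.8)]},
    ]

    print(choose_maneuver(candidates)["name"])  # prints "swerve into wall"

Setting the vulnerability weights is, of course, exactly where the ethical argument lives; the code only makes the choice explicit.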

An article just published in Science, by Bonnefon, Shariff, and Rahwan, reveals a tension in people's attitudes on this matter.  In brief, the study found that people tend to think that self-driving cars should act for the greater good and thus do whatever lessens overall harm the most.  However, the same people say that they would rather buy cars that put their own occupants first, even if that comes at the expense of the greater good.

One worry raised by this unsurprising result is that people may refuse to adopt self-driving cars that could elect to sacrifice them in certain circumstances.  That would be a shame, critics argue, since those same people might well be safer if everyone adopted such cars.

I am doubtful about the severity of this issue.  Certainly, some people buy vehicles on the notion that they should protect their occupants at all costs.  Those vehicles are called SUVs.  However, I suspect that people will be more concerned with how productive they can be in a self-driving car than with exactly how safe it is.

Hal Hodson argues that such trolley problems are simplistic and reveal very little about how self-driving cars might behave in real life.  Certainly, these scenarios are simplistic.  However, simplification helps to clarify the issues at stake.  At the same time, a self-driving car programmed with even a few simple principles about how to act in emergencies will produce some surprises, and those surprises will no doubt be instructive.