
Morality, ethics of a self-driving car: Who decides who lives, dies?

Todd Spangler, Detroit Free Press

Published in Automotive News

Last month, Sebastian Thrun, who founded Google's self-driving car initiative, told Bloomberg that the cars will be designed to avoid accidents, but that "If it happens where there is a situation where a car couldn't escape, it'll go for the smaller thing."

But what if the smaller thing is a child?

How that question gets answered may be important to the development and acceptance of self-driving cars.

Azim Shariff, an assistant professor of psychology and social behavior at the University of California, Irvine, co-authored a study last year that found that while respondents generally agreed that a car should, in the case of an inevitable crash, kill the fewest number of people possible regardless of whether they were passengers or people outside of the car, they were less likely to buy any car "in which they and their family member would be sacrificed for the greater good."

Self-driving cars could save tens of thousands of lives each year, Shariff said. But individual fears could slow down acceptance, leaving traditional cars and their human drivers on the road longer to battle it out with autonomous or semi-autonomous cars. Already, the American Automobile Association says three-quarters of U.S. drivers are suspicious of self-driving vehicles.

"These ethical problems are not just theoretical," said Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University, who has worked with Ford, Tesla and other autonomous vehicle makers on just such issues.


While he can't talk about specific discussions, Lin says some automakers "simply deny that ethics is a real problem, without realizing that they're making ethical judgment calls all the time" in their development, determining what objects the car will "see," how it will predict what those objects will do next and what the car's reaction should be.

Does the computer always follow the law? Does it slow down whenever it "sees" a child? Is it programmed to generate a random "human" response? Do you make millions of computer simulations, simply telling the car to avoid killing anyone, ever, and program that in? Is that even an option?

"You can see what a thorny mess it becomes pretty quickly," said Lindberg. "Who bears that responsibility? ... There are half a dozen ways you could answer that question leading to different outcomes."

The trolley problem
