
Morality, ethics of a self-driving car: Who decides who lives or dies?

Todd Spangler, Detroit Free Press

Published in Automotive News

How that question gets answered may be important to the development and acceptance of self-driving cars.

Azim Shariff, an assistant professor of psychology and social behavior at the University of California, Irvine, co-authored a study last year that found that while respondents generally agreed that a car should, in the case of an inevitable crash, kill the fewest people possible regardless of whether they were passengers or people outside the car, they were less likely to buy any car "in which they and their family member would be sacrificed for the greater good."

Self-driving cars could save tens of thousands of lives each year, Shariff said. But individual fears could slow down acceptance, leaving traditional cars and their human drivers on the road longer to battle it out with autonomous or semi-autonomous cars. The American Automobile Association says three-quarters of U.S. drivers are suspicious of self-driving vehicles.

"These ethical problems are not just theoretical," said Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University, who has worked with Ford, Tesla and other autonomous vehicle makers on just such issues.

While he can't talk about specific discussions, Lin says some automakers "simply deny that ethics is a real problem, without realizing that they're making ethical judgment calls all the time" in their development, determining what objects the car will "see," how it will predict what those objects will do next and what the car's reaction should be.

Does the computer always follow the law? Does it slow down whenever it "sees" a child? Is it programmed to generate a random "human" response? Do you run millions of computer simulations, simply telling the car to avoid killing anyone, ever, and program that in? Is that even an option?


"You can see what a thorny mess it becomes pretty quickly," Lindberg said. "Who bears that responsibility? ... There are half a dozen ways you could answer that question leading to different outcomes."

Automakers and suppliers largely downplay the risks of what in philosophical circles is known as "the trolley problem," named for a no-win hypothetical in which, in its original form, a person watching a runaway trolley can allow it to hit several people or, by pulling a lever, divert it, killing someone else.

In the case of the self-driving car, the problem is often boiled down to a hypothetical vehicle with malfunctioning brakes hurtling toward a crowded crosswalk: A certain number of occupants will die if the car swerves; a number of pedestrians will die if it continues. The car must be programmed to do one or the other.

Philosophical considerations aside, automakers argue the scenario is so contrived that it's all but bunk.
