
Morality, ethics of a self-driving car: Who decides who lives, dies?

Todd Spangler, Detroit Free Press

Published in Automotive News

Automakers and suppliers largely downplay the risks of what in philosophical circles is known as "the trolley problem" – named for a no-win hypothetical situation in which, in the original format, a person witnessing a runaway trolley could allow it to hit several people or, by pulling a lever, divert it, killing someone else.

In the circumstance of the self-driving car, it's often boiled down to a hypothetical vehicle hurtling toward a crowded crosswalk with malfunctioning brakes: A certain number of occupants will die if the car swerves; a number of pedestrians will die if it continues. The car must be programmed to do one or the other.

Philosophical considerations aside, automakers argue it's all but bunk because it's so contrived.

"I don't remember when I took my driver's license test that this was one of the questions," said Manuela Papadopol, director of business development and communications for Elektrobit, a leading automotive software maker and a subsidiary of German auto supplier Continental AG.

If anything, self-driving cars could almost eliminate such an occurrence. They will sense such a problem long before it would become apparent to a human driver and slow down or stop. Redundancies -- for brakes, for sensors -- will detect danger and react more appropriately.

"The cars will be smart -- I don't think there's a problem there. There are just solutions," Papadopol said.

Alan Hall, Ford's spokesman for autonomous vehicles, described the self-driving car's capabilities – being able to detect objects with 360-degree sensory data in daylight or at night – as "superhuman."

"The car sees you and is preparing different scenarios for how to respond," he said.

Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University, said that, in general, many self-driving automakers believe the simple act of braking -- of slowing to a stop -- solves the trolley problem. But it doesn't in every case, such as when the car is being tailgated by a speeding fuel tanker and stopping short could be just as dangerous.

Some experts and analysts believe solving the trolley problem could be a simple matter of regulators or legislators deciding in advance what actions a self-driving car should take in a no-win situation. But others doubt that any set of rules can capture and adequately react to every such scenario.

The question doesn't need to be as dramatic as asking who dies in a crash, either. It could be as simple as deciding what to do about jaywalkers, where a car places itself in a lane next to a large vehicle to make its passengers feel secure, or whether to run over a squirrel that darts into the road.

Chris Gerdes, who as director of the Center for Automotive Research at Stanford University has been working with Ford, Daimler and others on the issue, said the question is ultimately not about deciding who dies. It's about how to keep no-win situations from happening in the first place and, when they do occur, setting up a system for deciding who is responsible.

For instance, he noted California law requires vehicles to yield to pedestrians in a crosswalk but also says pedestrians have a duty not to suddenly enter a crosswalk against the light. Michigan and many other states have similar statutes.

Presumably, then, there could be a circumstance in which the responsibility for someone darting into the path of an autonomous vehicle at the last minute rests with that person -- just as it does under California law.

But that "forks off into some really interesting questions," Gerdes said, such as whether the vehicle could potentially be programmed to react differently, say, for a child. "Shouldn't we treat everyone the same way?" he asked. "Ultimately, it's a societal decision," meaning it may have to be settled by legislators, courts and regulators.

That could result in a patchwork of conflicting rules and regulations across the U.S.

"States would continue to have that ability to regulate how they operate on the road," said U.S. Sen. Gary Peters, D-Mich., one of the authors of federal legislation under consideration that would allow for tens of thousands of autonomous vehicles to be tested on U.S. highways in the years to come. He says that while design and safety standards will rest with federal regulators, states will continue to impose traffic rules.

Peters acknowledged that it would be "an impossible standard" to eliminate all crashes. But he argued that people need to remember that autonomous vehicles will save tens of thousands of lives a year. In 2015, the consulting firm McKinsey & Co. said research indicated self-driving cars could reduce traffic fatalities by 90 percent once fully deployed. More than 37,000 people died on U.S. roads in 2016 -- the vast majority due to human error.

But researchers, automakers, academics and others understand something else about self-driving cars and the risks they may still pose, namely that for all their promise to reduce accidents, they can't eliminate them.

"It comes back to whether you want to find ways to program in specifics or program in desired outcomes," said Gerdes. "At the end of the day, you're still required to come up with what you want the desired outcomes to be and the desired outcome cannot be to avoid any accidents all the time.

"It becomes a little uncomfortable sometimes to look at that."

The hard questions

While some people in the industry, like Tesla's Elon Musk, believe fully autonomous vehicles could be on U.S. roads within a few years, others say it could be a decade or more – and even longer before the full promise of self-driving cars and trucks is realized.

The trolley problem is just one of the questions that has to be cracked before then.

There are others, like those faced by Daryn Nakhuda, the founder and CEO of Mighty AI, which is in the business of breaking down into data all the objects self-driving cars are going to need to "see" in order to predict and react. A bird flying at the window. A thrown ball. A mail truck parked so there is not enough space in the car's lane to pass without crossing the center line.

Automakers will have to decide what the car "sees" and what it doesn't. Seeing everything around it – and processing it – could be a waste of limited processing power. Which means another set of ethical and moral questions.


Then there is the question of how self-driving cars could be taught to learn and respond to the tasks they are given -- the stuff of science fiction that seems about to come true.

While self-driving cars can be programmed -- told what to do when that school bus comes hurtling toward them -- there are other options. Through millions of computer simulations and data from real self-driving cars being tested, the cars themselves can begin to learn the "best" way to respond to a given situation.

For example, Waymo -- Google's self-driving car arm -- in a recent government filing said through trial and error in simulations, it's teaching its cars how to navigate a tricky left turn against a flashing yellow arrow at a real intersection in Mesa, Ariz. The simulations -- not the programmers -- determine when it's best to inch into the intersection and when it's best to accelerate through it. And the cars learn how to mimic real driving.

Ultimately, through such testing, the cars themselves could potentially learn how best to get from Point A to Point B, just by being programmed to discern what "best" means -- say, the fastest, safest, most direct route. Through simulation and data gathered from real-world conditions, the cars would "learn" and execute the request.

Here's where the science fiction comes in, however.

A computer programmed to "learn" how to play the ancient Chinese game of Go by just such a means is not only now beating grandmasters for the first time in history -- and long after computers were beating grandmasters in chess -- it is making moves that seem counter-intuitive and inexplicable to expert human players.

What might that look like with cars?

At the American Center for Mobility in Ypsilanti, where a test ground for self-driving cars is being completed, President and CEO John Maddox said vehicles will be put to the test against what he calls "edge" cases they will have to deal with regularly – such as not confusing the darkness of a tunnel with a wall or accurately predicting whether a person is about to step off a curb.

The facility will also play a role, through that testing, in getting the public used to the idea of what self-driving cars can do, how they will operate and how they can be far safer than vehicles operated by humans, even if some questions remain about their functioning.

"Education is critical," Maddox said. "We have to be able to demonstration and illustrate how AVs work and how they don't work."

As for the trolley problem, most automakers and experts expect some sort of standard to emerge -- even if it's not entirely clear what it will be.

At SAE International -- formerly known as the Society of Automotive Engineers, a global standards-setting group -- Chief Product Officer Frank Menchaca said reaching a perfect standard is a daunting, if not impossible, task, with so many fluid factors involved in any accident. Speed. Situation. Weather conditions. Mechanical performance.

Even with that standard, there may be no good answer to the question of who dies in a no-win situation, he said. Especially if it's to be judged by a human.

"As human begins we have hundreds of thousands of years of moral, ethical, religious and social behaviors programmed inside of us," he added. "It's very hard to replicate that."

Sticky scenarios

Here are a handful of scenarios to think about when considering the sticky ethical circumstances self-driving cars could find themselves in:

A self-driving vehicle is surrounded -- on one side by a group of pedestrians and, on the other, by just one person. Does it give the larger group more space than the one person to lower the risk of someone stepping in front of it? Does doing so potentially increase the chances of the one being struck?

What about a car designed to mimic a human preference for giving a large truck passing on the highway a wider berth just to make its passengers feel safer? Does that increase or decrease the likelihood of striking something on the other side of the car?

If self-driving cars are programmed to never hit a pedestrian under any circumstance, do crosswalks -- or any other spot on a street -- become jammed with pedestrians walking against traffic, knowing these cars will not strike them?

What about animals? If a dog dashes into the street, or a deer, or a squirrel, does the car automatically hit it, or does it speed up, brake suddenly or veer into another lane – potentially increasing its chances of hitting another vehicle?

Could fully autonomous cars become so prevalent and smart that in reacting to traffic accidents or congestion, they flood into neighborhoods not designed for such heavy traffic? As Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University, wrote in Forbes this summer, "This could increase risk to children playing on these streets, lower property values if road noise is louder, and create other externalities."

Source: Free Press research, Patrick Lin

(c)2017 Detroit Free Press

Visit the Detroit Free Press at www.freep.com

Distributed by Tribune Content Agency, LLC.

