WASHINGTON -- Consider this hypothetical:
It's a bright, sunny day and you're alone in your spanking new self-driving vehicle. You're sitting back, enjoying the view, moving along at the 45 mph speed limit.
As you approach a rise in the road, heading south, a human-driven school bus appears heading north and veers sharply toward you. There is no time to stop safely, and no time for you to take control of the car.
Does the car:
-- Swerve sharply into the trees, possibly killing you but possibly saving the bus and its occupants?
-- Perform a sharp evasive maneuver around the bus and into the oncoming lane, possibly saving you but sending the bus and its driver swerving into the trees, killing her and some of the children on board?
-- Hit the bus, possibly killing you as well as the driver and kids on the bus?
In everyday driving, such no-win choices may be exceedingly rare, but when they happen, what should a self-driving car -- programmed in advance -- do? And what about less dire situations that still demand a snap moral judgment?
It's not just a theoretical question anymore, with predictions that tens of thousands of semi-autonomous vehicles may be on the roads within a few years. Investment in the field totals about $80 billion. Companies like Google-affiliated Waymo are working feverishly on the technology; mobility companies like Uber and Tesla are racing to beat them to market. Detroit's automakers are placing big bets of their own.
There's every reason for excitement: Self-driving vehicles will ease commutes, returning lost time to workers; enhance mobility for seniors and those with physical challenges; and sharply reduce deaths on U.S. highways, which now number about 35,000 a year.