The future of artificial intelligence (AI) is here: self-driving cars, grocery-delivering drones and voice assistants like Alexa that control more and more of our lives, from the locks on our front doors to the temperatures of our homes.
But as AI permeates everyday life, what about the ethics and morality of the systems? For example, should an autonomous vehicle swerve into a pedestrian or stay its course when facing a collision?
These questions plague technology companies as they develop AI at a clip outpacing government regulation, and have led Seattle University to develop a new ethics course for the public.
Launched last week, the free, online course for businesses is the first step in a Microsoft-funded initiative to merge ethics and technology education at the Jesuit university.
Seattle U senior business-school instructor Nathan Colaner hopes the new course will become a well-known resource for businesses "as they realize that [AI] is changing things," he said. "We should probably stop to figure out how."
The course -- developed by Colaner, law professor Mark Chinen and adjunct business and law professor Tracy Ann Kosa -- explores the meaning of ethics in AI by examining guiding principles proposed by nonprofits and technology companies. A case study on facial recognition asks students to evaluate different uses of the technology, such as surveillance or identification, and to determine how it should be regulated. The module draws on recent studies showing that facial-analysis systems have higher error rates on images of darker-skinned women than on images of lighter-skinned men.
The course also explores the impact of AI on different occupations.
The public's desire for more guidance around AI may be reflected in a recent Northeastern University and Gallup survey that found only 22% of U.S. respondents believed colleges or universities were adequately preparing students for the future of work.
Many people who work in tech aren't required to complete a philosophy or ethics course in school, said Quinn, which he believes contributes to blind spots in the development of technology. Those blind spots may have led to breaches of public trust, such as government agencies using facial recognition to scan license photos without consent, Alexa workers listening to the voice commands of unaware consumers and racial bias in AI algorithms.
As regulations on emerging technology wend through state legislatures, colleges such as the University of Washington and Stanford University have created ethics courses to mitigate the technology's potential harms. Seattle University goes a step further by opening its course to the public.