
Uncertainty reigns for insurers as industry adopts AI standards

By Zoe Sagalow, CQ-Roll Call

Published in Science & Technology News

As the insurance industry increasingly relies on artificial intelligence, its state-based regulators are thinking about how to ensure that the technology treats policyholders fairly.

The National Association of Insurance Commissioners last month unanimously adopted guiding principles stating that AI should be fair, ethical, accountable and safe. But these remain broad-brush notions for a technology that is only starting to be used.

Insurers and rating and advisory organizations "should be responsible for the creation, implementation and impacts of any AI system, even if the impacts are unintended," the NAIC's Innovation and Technology Task Force, part of its Executive Committee, wrote.

Jon Godfread, chairman of NAIC's Artificial Intelligence Working Group, described the principles in an interview as an "aspirational document" or "guidepost" for internal discussions among regulators. They don't carry the weight of law and aren't model regulations.

"This will be the starting point for the next steps of AI regulation," added Godfread, who is commissioner of the North Dakota Insurance Department.

Those next steps could be reporting requirements for insurance companies' usage of AI and a certification process. Certification may mean a third party or internal actuary would run checks to make sure the AI doesn't have implicit biases or commit "proxy discrimination," which occurs when computers make selections based on data that acts as a proxy for race or other traits.

Godfread said regulators need more information about whether AI has been a problem. He said the technology has been used to speed internal processes, but the industry hasn't reached the point of having "a lot of consumer-facing decisions being made using artificial intelligence."

Congress studies AI

Rep. Bill Foster, D-Ill., chairman of the House Financial Services Committee's Task Force on Artificial Intelligence, said in an interview that the NAIC is struggling with the same issues as everyone else: determining the details of what's "fair and ethical."

Foster described a scenario where auto insurers might use targeted advertising to maximize profitable customers. If an algorithm is designed to treat young men and young women fairly by charging them the same rates, then women would be more profitable customers because they tend to be less risky drivers. The algorithm would end up finding proxies for gender, such as which websites a person visits.
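The proxy effect Foster describes can be shown with a small simulation. This is an illustrative sketch only: the population size, the correlation between gender and the proxy feature (standing in for "which websites a person visits"), and the pricing numbers are all invented for the example, not drawn from any insurer's data.

```python
import random

random.seed(0)

# Hypothetical population: the pricing rule never sees gender, but a
# proxy feature (say, visiting a certain website) correlates with it.
n = 10_000
people = []
for _ in range(n):
    is_woman = random.random() < 0.5
    # Assumed correlation: the proxy fires for 80% of women, 20% of men.
    proxy = random.random() < (0.8 if is_woman else 0.2)
    people.append((is_woman, proxy))

# A "gender-blind" pricing rule that discounts on the proxy alone.
base_rate, discount = 1000.0, 200.0

def premium(has_proxy):
    return base_rate - (discount if has_proxy else 0.0)

avg_women = sum(premium(p) for w, p in people if w) / sum(1 for w, _ in people if w)
avg_men = sum(premium(p) for w, p in people if not w) / sum(1 for w, _ in people if not w)

print(f"average premium, women: {avg_women:.0f}")
print(f"average premium, men:   {avg_men:.0f}")
# Although gender is never an input, women end up charged less on
# average -- the proxy has reconstructed the gender-based pricing.
```

This is the pattern regulators call proxy discrimination: removing a protected attribute from the model does not remove its influence if correlated features remain available.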

(c)2020 CQ-Roll Call, Inc., All Rights Reserved, Distributed by Tribune Content Agency, LLC.