Programmers, lawmakers want artificial intelligence to eliminate bias, not promote it

By Kristian Hernández

Published in News & Features

DALLAS — When software engineer Bejoy Narayana was developing an application to help automate Dallas-Fort Worth’s Section 8 voucher program, he stopped and asked himself, “Could this system be used to help some people more than others?” The application uses artificial intelligence, known as AI, and automation to help voucher holders find rental units, property owners complete contracting and housing authorities conduct inspections. The software and mobile app were released in 2018 in partnership with the Dallas Housing Authority, which gave Narayana access to data from some 16,000 Section 8 voucher holders.

Artificial intelligence is used in a host of algorithms in medicine, banking and other major industries. But as it has proliferated, studies have shown that AI can be biased against people of color. In housing, AI has helped perpetuate segregation, redlining and other forms of racial discrimination against Black families, who disproportionately rely on vouchers.

Narayana worried that his app would do the same, so he tweaked it so that tenants could search for apartments using their voucher number alone, without providing any other identifying information.

As an Indian immigrant overseeing a team largely made up of people of color, Narayana was especially sensitive to the threat of racial bias. But lawmakers in a growing number of states don’t want to rely on the goodwill of AI developers. Instead, as AI is adopted by more industries and government agencies, they want to strengthen and update laws to guard against racially discriminatory algorithms — especially in the absence of federal rules.

Since 2019, more than 100 bills related to artificial intelligence and automated decision systems have been introduced in nearly two dozen states, according to the National Conference of State Legislatures. This year, lawmakers in at least 16 states proposed creating panels to review AI’s impact, promote public and private investment in AI, or address transparency and fairness in AI development.


A bill in California would be the first to require developers to evaluate the privacy and security risks of their software, as well as assess their products’ potential to generate inaccurate, unfair, biased or discriminatory decisions. Under the proposed law, the California Department of Technology would have to approve software before it could be used in the public sector.

The bill, introduced by Assembly Member Ed Chau, a Democrat and chair of the Committee on Privacy and Consumer Protection, passed the California State Assembly earlier this month and was pending in the state Senate at publication time. Chau’s office did not respond to multiple requests for comment.

Vinhcent Le, a lawyer at the Greenlining Institute, an advocacy group focused on racial economic justice, helped write the California legislation. Le described such algorithms as gatekeepers to opportunity that can either perpetuate segregation and redlining or help end them.

“It’s great that the app’s developers decided to omit a person’s name, but we can’t rely on small groups of people making decisions that can essentially affect thousands,” Le said. “We need an agreed-upon way to audit these systems to ensure they are integrating equity metrics in ways that don’t unfairly disadvantage people.”


©2021 The Pew Charitable Trusts. Distributed by Tribune Content Agency, LLC.