How Artificial Intelligence Might Prevent You From Getting Hired
Published in ACLU
If you applied for a new job in the last few years, chances are an artificial intelligence tool was used to make decisions affecting whether or not you got the job. Long before ChatGPT and generative AI ushered in a flood of public discussion about the dangers of AI, private companies and government agencies had already incorporated AI tools into just about every facet of our daily lives, including housing, education, finance, public benefits, law enforcement and health care. Recent reports indicate that 70% of companies and 99% of Fortune 500 companies already use AI-based and other automated tools in their hiring processes. Their use is growing in lower-wage job sectors such as retail and food service, where Black and Latine workers are disproportionately concentrated.
AI-based tools have been incorporated into virtually every stage of the hiring process. They are used to target online advertising for job opportunities and to match candidates to jobs, and vice versa, on platforms such as LinkedIn and ZipRecruiter. They are used to reject or rank applicants through automated resume screening and chatbots that rely on knockout questions, keyword requirements, or specific qualifications or characteristics. They are used to assess and measure often amorphous personality characteristics, sometimes through online versions of multiple-choice tests that ask situational or outlook questions, and sometimes through video-game-style tools that analyze how someone plays a game. And if you have ever been asked to record a video of yourself as part of an application, a human may or may not have ever viewed it: Some employers instead use AI tools that purport to measure personality traits through voice analysis of tone, pitch and word choice, and video analysis of facial movements and expressions.
Many of these tools pose an enormous danger of exacerbating existing discrimination in the workplace based on race, sex, disability and other protected characteristics, despite marketing claims that they are objective and less discriminatory. AI tools are trained on large amounts of data and make predictions about future outcomes based on correlations and patterns in that data; many of the tools employers use are trained on data about the employer's own workforce and prior hiring processes. But that data itself reflects existing institutional and systemic biases.
Moreover, the correlations that an AI tool uncovers may not actually have a causal connection with being a successful employee, may not themselves be job-related and may be proxies for protected characteristics. For example, one resume screening tool identified being named Jared and playing high school lacrosse as correlated with being a successful employee. Likewise, the amorphous personality traits that many AI tools are designed to measure -- characteristics such as positivity, ability to handle pressure or extroversion -- are often not necessary for the job, may reflect standards and norms that are culturally specific, or can screen out candidates with disabilities such as autism, depression or attention-deficit/hyperactivity disorder.
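To make that mechanism concrete, here is a minimal, hypothetical sketch in Python of how a screening model trained on an employer's past hiring outcomes can end up rewarding irrelevant proxy traits. The data, the feature names such as played_lacrosse and named_jared, and the simple scoring rule are all invented for illustration; no vendor's actual system is claimed to work this way.

```python
# A minimal, hypothetical sketch (not any vendor's actual system) of how a
# screening model trained on an employer's past hiring data can latch onto
# proxy features. All names, features and data below are invented.

# Each record: candidate features plus whether they were hired historically.
past_candidates = [
    {"years_experience": 5, "played_lacrosse": 1, "named_jared": 1, "hired": 1},
    {"years_experience": 6, "played_lacrosse": 1, "named_jared": 0, "hired": 1},
    {"years_experience": 7, "played_lacrosse": 0, "named_jared": 0, "hired": 0},
    {"years_experience": 4, "played_lacrosse": 0, "named_jared": 0, "hired": 0},
    {"years_experience": 8, "played_lacrosse": 1, "named_jared": 1, "hired": 1},
]

features = ["years_experience", "played_lacrosse", "named_jared"]

def feature_weights(records, features):
    """Score each feature by how strongly it co-occurs with past hires.

    This stands in for the pattern-finding step of a real model: it rewards
    whatever distinguishes past hires, whether or not it is job-related.
    """
    hired = [r for r in records if r["hired"]]
    rejected = [r for r in records if not r["hired"]]
    weights = {}
    for f in features:
        hired_avg = sum(r[f] for r in hired) / len(hired)
        rejected_avg = sum(r[f] for r in rejected) / len(rejected)
        weights[f] = hired_avg - rejected_avg
    return weights

print(feature_weights(past_candidates, features))
# Because the past hires happened to share irrelevant traits, "played_lacrosse"
# and "named_jared" receive positive weight: the model has learned to prefer
# candidates who resemble the old workforce, not candidates who can do the job.
```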
Predictive tools that rely on analysis of facial, audio or physical interaction with a computer are even worse. We are extremely skeptical that it is possible to measure personality characteristics accurately through things such as how fast someone clicks a mouse, the tone of a person's voice or facial expressions. And even if it is possible, these tools increase the risk that individuals will be automatically rejected or scored lower on the basis of disability, race and other protected characteristics.
Beyond questions of efficacy and fairness, people often have little or no awareness that such tools are being used, let alone how they work or whether they are making discriminatory decisions. Applicants often do not have enough information about the process to know whether to seek an accommodation on the basis of disability, and the lack of transparency makes it harder to detect discrimination and for individuals, private lawyers and government agencies to enforce civil rights laws.
Employers must stop using automated tools that carry a high risk of screening people out based on disabilities, race, sex and other protected characteristics. It is critical that any tools employers do consider adopting undergo robust third-party assessments for discrimination, and that employers provide applicants with proper notice and accommodations.
We also need strong regulation and enforcement of existing protections against employment discrimination. Civil rights laws bar discrimination in hiring whether it's happening through online processes or otherwise. That means regulators already have the authority and obligation to protect people in the labor market from the harms of AI tools, and individuals can assert their rights in court. Agencies such as the Equal Employment Opportunity Commission have taken some initial steps to inform employers about their obligations, but they should follow that up by creating standards for impact assessments, notice and recourse, and engage in enforcement actions when employers fail to comply.
Legislators also have a role to play. State legislatures and Congress have begun considering legislation to help job applicants and employees ensure that the uses of AI tools in employment are fair and nondiscriminatory. These legislative efforts are diverse and may be roughly divided into three categories.
First, some efforts focus on providing transparency around the use of AI, especially to make decisions in protected areas of life, including employment. These bills require employers to provide individuals not only with notice that AI was or will be used to make a decision about their hiring or employment, but also with the data (or a description of the data) used to make that decision and how the AI system reaches its ultimate decision.
Second, other legislation requires that entities deploying AI tools assess their impact on privacy and nondiscrimination. This kind of legislation may require impact assessments for AI hiring tools to better understand their potential negative effects and to identify strategies to mitigate those effects. Although these bills may not create an enforcement mechanism, they are critical to forcing companies to take protective measures before deploying AI tools.
Third, some legislatures are considering bills that would impose additional nondiscrimination responsibilities on employers using AI tools and plug some gaps in existing civil rights protections. For example, last year's proposed American Data Privacy and Protection Act included language that would have prohibited using data -- including in AI tools -- "in a manner that discriminates in or otherwise makes unavailable the equal enjoyment of goods or services on the basis of race, color, religion, national origin, sex, or disability." Some state legislation would ban uses of particularly high-risk AI tools.
These approaches across agencies and legislatures complement one another as we take steps to protect job applicants and employees in a quickly evolving space. AI tools play an increasingly prevalent role in our everyday lives, and the discrimination they can enable is an immediate threat that policymakers must address.
Olga Akselrod is a senior staff attorney with the ACLU Racial Justice Program. Cody Venzke is an ACLU senior policy counsel. For more than 100 years, the ACLU has worked in courts, legislatures and communities to protect the constitutional rights of all people.
Copyright 2023 Creators Syndicate Inc.