AI Is Infringing on Your Civil Rights. Here's How We Can Stop That
Searching for an apartment online, applying for a loan, going through airport security or looking up a question on a search engine -- you might think of these as mundane exchanges, but, in many of them, you're actually interacting with artificial intelligence.
Avoiding AI in our quotidian activities feels impossible, especially now that public and private organizations use it to make decisions about us in hiring, housing, welfare, budgeting and other high-stakes areas. While proponents of AI boast about how efficient the technology is, the decisions it makes about us are often discriminatory, nearly impossible to contest and an infringement on our civil rights.
However, inequity and injustice from artificial intelligence need not be our status quo. Sen. Ed Markey and Rep. Yvette Clarke have just reintroduced the AI Civil Rights Act of 2025, which would help ensure that AI developers and deployers do not violate our civil rights. The ACLU strongly urges Congress to pass this bill so we can prevent AI systems from undermining the equal opportunities that civil rights laws secured decades ago.
Why Do We Need the AI Civil Rights Act of 2025?
The AI Civil Rights Act shores up existing civil rights laws so that their protections clearly apply to artificial intelligence.
Whether you are looking at the Civil Rights Act of 1964, the Fair Housing Act, the Voting Rights Act, the Americans with Disabilities Act or a multitude of other civil rights statutes, current civil rights laws may not be easily enforced against discriminatory AI. In many cases, individuals may not even know AI was used, deployers may not be aware of its discriminatory impact and developers may not have tested the AI model for discriminatory harms. By covering AI harms in several consequential areas -- employment, education, housing, utilities, health care, financial services, insurance, criminal justice, identity verification and government welfare benefits -- the AI Civil Rights Act provides people whose civil rights are eroded by AI systems with interlocking protections: rules against discrimination, testing protocols and notice requirements across these sectors.
Ensuring AI Doesn't Become a Tool for Discrimination
One of the most important aspects of the AI Civil Rights Act is that it will allow us to better defend against discriminatory AI outputs. A decision from an AI model can often appear objective, but when you open up the algorithm, it can have a disparate impact on protected groups. Disparate impact, in the context of artificial intelligence, is a form of discrimination in which an AI model's decisions disproportionately harm one group over another; it has been documented in health care, financial services, education, criminal justice and other significant areas.
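To make the statistical idea concrete, here is a minimal sketch of one conventional way analysts quantify disparate impact -- the "four-fifths rule" long used in employment-discrimination analysis. The Python code and the numbers in it are purely hypothetical illustrations; neither the bill nor existing law mandates this particular test.

```python
def selection_rate(outcomes):
    """Fraction of applicants in a group who received a favorable decision."""
    return sum(outcomes) / len(outcomes)

# Hypothetical model decisions (1 = approved, 0 = denied) for two groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 8 of 10 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 3 of 10 approved

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)

# Adverse impact ratio: the lower selection rate divided by the higher one.
# Under the four-fifths rule, a ratio below 0.8 is conventionally treated
# as preliminary evidence of disparate impact.
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Selection rates: {rate_a:.0%} vs. {rate_b:.0%}")  # 80% vs. 30%
print(f"Adverse impact ratio: {impact_ratio:.2f}")        # 0.38 -- below 0.8
```

In practice, establishing disparate impact requires far more rigorous analysis than this toy calculation, which is part of why such claims are hard to bring, as described next.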
Unfortunately, disparate impact claims can be onerous to bring. For one, to prevail on a disparate impact claim, plaintiffs must statistically demonstrate that an algorithm disproportionately harms a protected group and that a less discriminatory alternative exists. That burden becomes even harder to meet when AI companies refuse to disclose their algorithms for evaluation, claiming they are "trade secrets." For another, not all civil rights laws give people a private right of action to bring a disparate impact claim, and President Donald Trump has been steadily rolling back the use of disparate impact in civil rights enforcement. This continual weakening of disparate impact protection makes AI-related discrimination claims even more difficult to file.
To help with this, the AI Civil Rights Act addresses algorithmic discrimination by making it explicitly unlawful for AI developers or deployers to offer, license, promote, sell or use an algorithm that causes or contributes to a disparate impact in critical life areas such as housing and employment. Centering disparate impact in the AI Civil Rights Act ensures that concrete protections exist for individuals affected by discriminatory AI models.
Transparency and Accountability in AI Systems
Beyond safeguarding against AI-powered discrimination with disparate impact protections, the AI Civil Rights Act gives us the transparency we desperately need from AI developers and deployers. It requires developers, deployers and independent auditors to conduct pre-deployment evaluations, impact assessments and annual reviews of their algorithms. These evaluations will be critical in determining whether a model harms people's civil rights and whether it can be deployed in a specific sector at all.
The AI Civil Rights Act also brings clarity to the long-debated question of who should be held accountable for the civil rights harms caused by algorithmic systems. If passed, the act will make developers and deployers responsible for taking reasonable steps to ensure their AI models do not violate our civil rights. These steps include documenting any harms that can arise from a model, being fully transparent with independent auditors, consulting with stakeholders affected by AI models and ensuring that the benefits of using an algorithm outweigh its harms, among other measures. Developers and deployers found violating the act risk civil penalties, fees and other consequences at the federal, state and individual levels. The accountability mechanisms in the act are pivotal to empowering individuals against algorithmic harm while making clear to AI developers and deployers that it is their duty to build and deploy low-risk models.
What Is Next?
If we want our AI systems to be safe, trustworthy and nondiscriminatory, the AI Civil Rights Act is how we start.
"AI is shaping access to opportunity across the country," says Cody Venze, ACLU senior policy counsel. "'Black box' systems make decisions about who gets a loan, receives a job offer, or is eligible for parole, often with little understanding of how those decisions are made. The AI Civil Rights Act makes sure that AI systems are transparent and give everyone a fair chance to compete."
Jo Gasior-Kavishe is an intern at the ACLU's National Political Advocacy Department in Democracy and Technology. For more than 100 years, the ACLU has worked in courts, legislatures and communities to protect the constitutional rights of all people. With a nationwide network of offices and millions of members and supporters, the ACLU takes on the toughest civil liberties fights in pursuit of liberty and justice for all. To find out more about the ACLU and read features by other Creators Syndicate writers and cartoonists, visit the Creators website at www.creators.com.
Copyright 2025 Creators Syndicate Inc.