
Commentary: The human cost of unregulated AI tools

Tomiwa Ilori, Progressive Perspectives

On Dec. 24, Elon Musk, CEO of xAI, encouraged people to try the Grok chatbot’s new image editing feature. Users quickly began using this tool to sexualize images, mostly of women and in some cases children.

Following Musk’s Dec. 31 posts showcasing Grok-edited images of himself in a bikini and a SpaceX rocket with a woman’s undressed body superimposed on top, requests and outputs surged. Over a nine-day span, Grok generated roughly 4.4 million images on X, nearly half of which contained sexualized imagery of women.

These images included sexually explicit deepfakes of real people and synthetic images not linked to specific individuals. Although xAI’s own terms of service prohibit the “sexualization or exploitation of children” or “violating a person’s privacy,” X and Grok users were able to prompt Grok to create synthetic images of real individuals “undressed,” without their consent and without any apparent safeguards to prevent them from doing so.

The volume and nature of these images suggest this was not fringe misuse but evidence of the absence of meaningful safeguards. Tech companies have recklessly created and deployed powerful new AI tools that are causing foreseeable harm.

On Jan. 3, amid global criticism, X promised to take strong action against illegal content, including child sexual abuse material. But rather than disable the feature, X on Jan. 9 simply limited it to paid subscribers. On Jan. 14, in addition to other restrictions, it announced it would block users in jurisdictions where generating images of real people in bikinis or similar attire is illegal.

Human Rights Watch, for which I work, reached out to xAI for comment, but received no response.

In the United States, the state of California opened an investigation into Grok, and attorneys general in 35 states have demanded that xAI immediately stop Grok’s production of sexually abusive deepfakes.

Some other governments have acted quickly to address the threat of sexualized deepfakes. Malaysia and Indonesia temporarily banned Grok, while Brazil asked xAI to curb this “misuse of the tool.” The United Kingdom signaled that it would strengthen its tech regulation in response. The European Commission has opened investigations into whether Grok has met its legal obligations under the European Union’s Digital Services Act. India demanded urgent action, and France expanded a criminal investigation into X.

In its Jan. 14 announcement, X pledged to prevent "the editing of images of real people in revealing clothing" for all users, and to restrict the generation of such images in jurisdictions where it is illegal. Frankly, this is insufficient, like putting a band-aid on a major wound.

The new U.S. Take It Down Act, which targets the online spread of nonconsensual intimate images, will not fully take effect until May. It imposes criminal liability on individuals who publish such content and requires platforms to implement notice and removal procedures for specific content without holding them accountable for large-scale abuse.


Protecting people from AI-driven sexual exploitation demands urgent and decisive action anchored in human rights protection.

First, governments should establish clear responsibilities for AI companies whose tools nonconsensually generate sexually abusive content. They should implement strong and enforceable safeguards, including requiring these companies to incorporate rights-respecting technical measures that block user attempts to produce these images.

Second, platforms that host and integrate AI chatbots or tools should provide clear and transparent disclosures of the way their systems are trained and used, as well as the enforcement actions they take against sexually explicit deepfakes.

Third, AI companies have a responsibility to respect human rights and should actively mitigate any risk of harm from their products or services. Where harm from such products, services or features cannot be mitigated, the companies should consider terminating the product altogether. AI companies cannot simply deflect responsibility onto users when their own systems are being employed to cause harm on an alarming scale.

Finally, AI tools with image generation features should be required to undergo rigorous audits and be subjected to strict regulatory oversight. Regulators should ensure that any content moderation measures comply with the principles of legality, proportionality and necessity.

The surge in AI-generated sexual abuse demonstrates the human cost of inadequate regulation. Unless authorities act decisively and AI companies implement rights-respecting safeguards, Grok will not be the last tool turned against the rights of women and children.

_____

Tomiwa Ilori is a senior tech and human rights researcher at Human Rights Watch. This column was produced for Progressive Perspectives, a project of The Progressive magazine, and distributed by Tribune News Service.

_____


©2026 Tribune Content Agency, LLC.
