High school in Orange County, California, investigates 'inappropriate' AI-generated images of students

By Hannah Fry, Los Angeles Times

LOS ANGELES — Laguna Beach High School administrators have launched an investigation after a student allegedly created and circulated "inappropriate images" of other students using artificial intelligence.

It is not clear how many students are involved in the incident, what specifically the images depicted or how they were distributed.

In an email to parents on March 25, Principal Jason Allemann wrote that school leadership is "taking steps to investigate and directly address this issue with those involved, while also using this situation as a teachable moment for our students, reinforcing the importance of responsible behavior and mutual respect."

The Laguna Beach Police Department is assisting with the investigation, but a department spokesperson declined to provide any details on the probe because the individuals involved are minors.

The Orange County high school joins a growing number of educational institutions grappling with the use of artificial intelligence in the classroom and in social settings.

At schools across the country, people have combined deepfake technology with real photos of female students to create fraudulent nude images. Such deepfakes can be produced with nothing more than a cellphone.


Last month, five Beverly Hills eighth-graders were expelled for their involvement in the creation and sharing of fake nude pictures of their classmates. The students superimposed pictures of their classmates' faces onto simulated nude bodies generated by artificial intelligence. In total, 16 eighth-grade students were targeted by the pictures, which were shared through messaging apps, according to the district.

A 16-year-old high school student in Calabasas said a former friend used AI to generate pornographic images of her and circulated them, KABC-TV reported last month.

It's not just teens who are being targeted by AI-created images. In January, AI-generated sexually explicit images of Taylor Swift were distributed on social media. The situation prompted calls from angered fans for lawmakers to adopt legislation to protect against the creation and sharing of deepfake images.

"It is a very challenging space and the technological advancements and capabilities are occurring at a very rapid pace, which makes it all the more challenging to wrap one's head around," said Amy Mitchell, the executive director of the Center for News, Technology and Innovation, a policy research center.


©2024 Los Angeles Times. Distributed by Tribune Content Agency, LLC.
