Laguna Beach High School investigates 'inappropriate' AI-generated images of students
Laguna Beach High School administrators have launched an investigation after a student allegedly created and circulated "inappropriate images" of other students using artificial intelligence.
It is not clear how many students are involved in the incident, what the images specifically depicted or how they were distributed.
In an email to parents on March 25, Principal Jason Allemann wrote that school leadership is "taking steps to investigate and directly address this issue with those involved, while also using this situation as a teachable moment for our students, reinforcing the importance of responsible behavior and mutual respect."
The Laguna Beach Police Department is assisting with the investigation, but a department spokesperson declined to provide any details on the probe because the individuals involved are minors.
The Orange County high school joins a growing number of educational institutions grappling with the use of artificial intelligence in the classroom and in social settings.
At schools across the country, people have used deepfake technology combined with real images of female students to create fake nude images of them. The deepfake images can be produced with nothing more than a cellphone.
Last month, five Beverly Hills eighth-graders were expelled for their involvement in the creation and sharing of fake nude pictures of their classmates. The students superimposed pictures of their classmates' faces onto simulated nude bodies generated by artificial intelligence. In total, 16 eighth-grade students were targeted by the pictures, which were shared through messaging apps, according to the district.
A 16-year-old high school student in Calabasas said a former friend used AI to generate pornographic images of her and circulated them, KABC-TV reported last month.
It's not just teens who are being targeted by AI-created images. In January, AI-generated sexually explicit images of Taylor Swift were distributed on social media. The situation prompted calls from angered fans for lawmakers to adopt legislation to protect against the creation and sharing of deepfake images.
"It is a very challenging space and the technological advancements and capabilities are occurring at a very rapid pace, which makes it all the more challenging to wrap one's head around," said Amy Mitchell, the executive director of the Center for News, Technology and Innovation, a policy research center.
Several federal bills have been proposed, including the Preventing Deepfakes of Intimate Images Act, which would make it illegal to produce and share AI-generated sexually explicit material without the consent of the individuals portrayed. The Disrupt Explicit Forged Images and Non-Consensual Edits Act, or DEFIANCE Act, which was introduced this year, would allow victims to sue the creators of deepfakes if the creators knew the victims did not consent to their creation.
In California, state lawmakers have proposed extending laws prohibiting revenge porn and child porn to computer-generated images.
School districts are also trying to get a handle on the technology. This year, the Orange County Department of Education began leading monthly meetings with school districts to talk about the use of AI and how to integrate it into the education system.
But the problem of manipulated images like those that circulated at Laguna Beach High School is getting worse as the technology becomes more prevalent and easier to use, according to experts.
Artificial intelligence, particularly generative AI, continues to advance faster than society can responsibly absorb it, said Cindi Howson, chief data strategy officer at the technology company ThoughtSpot.
"The world is on a learning curve for generative AI and it's moving so quickly that we cannot just leave it up to regulators, the builders of AI or the schools themselves," she said.
Parents, school districts, government and the creators of AI platforms will each have to play a role in implementing safeguards, Howson said. In the meantime, she suggested parents monitor which apps their children are using and have conversations with them about how this technology can be used and abused.
Artificial intelligence technology paired with the widespread use of social media among teens who might not fully understand the consequences seems like an intractable problem, said Sheri Morgan, a Laguna Beach resident whose daughter attends Laguna Beach High School.
"The social media that's out there today, I think, further emphasizes this false sense of what you need, what you want, how you should look, and how you should be perceived by people," she said. "We talk to our kids a lot about the impacts of technology and social media and getting lost in the distraction of it, but it's a challenge."
In Laguna Beach, district officials have not detailed the possible disciplinary options being considered by administrators. The district said in a statement that each incident "is handled on a case-by-case basis considering the individual circumstances of the situation."
The high school, which has more than 1,000 students enrolled, plans to host panel discussions on AI-generated content for students during the school day. The panels will include the school resource officer, counselors, psychologists, and digital media and library specialists, Allemann wrote in a follow-up email to parents on Friday.
"In our small community, these incidents can have a far-reaching impact on our campus culture," Allemann wrote. "These actions not only compromise individual dignity but also undermine the positive and supportive environment we aim to foster at LBHS."