Epidemic of AI Deepfakes: The Threat to Female Students
Schools are facing an epidemic of students creating deepfake pornography of their classmates. AI deepfakes have emerged as a significant threat to female students, as the incident at Westfield High School in New Jersey illustrates. In October 2023, several 10th-grade girls discovered that male classmates had used AI to create explicit images of them. Despite the distress caused to the victims, school administrators did very little; a detective had to prompt them even to report the incident to law enforcement. The episode reflects a broader problem: many schools are unprepared to handle the rapid rise of AI-generated "deepfakes," and some ignore the effects of this abuse on young victims entirely.
Impact on Female Students
The threat of deepfakes and synthetic media comes not only from the technology used to create them, but from people's natural inclination to believe what they see. The psychological toll on female students targeted by deepfakes is profound. Victims experience anxiety, depression, and a deep sense of violation. The realistic quality of these deepfakes makes it difficult for victims to convince others that the images are fake, leading to social ostracization and long-lasting reputational damage. This can affect their academic performance, mental health, and even future opportunities, as the age-old stigma of victim-blaming may follow them for years. The incident at Westfield High School is a stark example: the affected girls and their families reported feeling abandoned by the school administration, which did little to address the issue publicly.
Moreover, the use of deepfake technology in schools has introduced a new dimension of bullying that is particularly insidious. Unlike traditional bullying, which can be more easily identified and addressed, deepfakes are difficult to detect and even harder to remove once they spread online. This creates an environment of fear and mistrust, where female students feel vulnerable to exploitation at any time. The trauma from these experiences can leave lasting scars, as victims are forced to grapple with the emotional and social fallout.
Legal and Institutional Response
Deepfake fraud attempts increased tenfold between 2022 and 2023, and this rapid advancement of AI technology has outpaced the legal framework in many areas, leaving victims of deepfakes with few options for recourse. While federal law makes it illegal to distribute AI-generated child sexual abuse material, a point the FBI recently underscored in a public warning, enforcement is challenging. Many school districts are unprepared to handle these incidents, as highlighted by the cases at Westfield High School and Issaquah High School in Washington, where administrators were "unsure" of their legal obligations.
The incident at Westfield High School also revealed the lack of a clear, consistent response from educational institutions. Despite the school’s claim that it had “conducted an investigation” and involved the police, affected families felt that the school’s actions were insufficient and slow. This reflects the broader trend where schools struggle to keep up with the challenges posed by AI technology, often lacking the policies, procedures and will necessary to effectively address such incidents.
Some districts, like Beverly Hills Unified School District, have taken a more proactive stance by expelling students involved in creating and sharing deepfakes. However, this level of response is not standard, and the lack of consistent policies across schools leaves many student victims at risk. These varying responses highlight the need for clearer guidelines and stronger measures to protect students from the harmful effects of deepfakes.
The Role of Technology Companies
Tech companies that develop and distribute AI tools also have a responsibility to prevent their misuse. While platforms like Snapchat and Instagram have policies against the distribution of explicit content, enforcement of those rules is minimal. Although the technology to create deepfakes is pervasive, the ability to detect and remove them is seemingly in its infancy, meaning that harmful content remains online for extended periods before being addressed, if it is addressed at all.
Recent statistics reveal the growing prevalence of deepfakes. A study by Sensity AI found that the number of deepfake videos online exceeded 100,000 in early 2024, with 98% of these being non-consensual pornography. The study also highlighted the worrying trend that the majority of victims are young women. These statistics underscore the urgent need for tech companies to take more aggressive action to curb the spread of deepfakes and protect vulnerable populations, especially young women.
Lack of Legislation
Legislators play a critical role in protecting students from deepfakes. A major gap is the lack of legislation that criminalizes the creation and distribution of fake explicit images and videos. Lawmakers should enact laws that specifically address the creation, distribution, and possession of AI-generated explicit content, include severe penalties to deter offenders, and provide clear guidelines for schools and law enforcement on how to handle incidents. Additionally, legislators should mandate that educational institutions implement digital literacy programs, provide resources for victim support, and require technology companies to develop and deploy tools to detect and prevent the spread of deepfakes.
Prevention and Protection
To combat this growing threat of deepfakes, a multi-faceted approach is necessary. First, there needs to be greater awareness among students, parents, and educators about the risks associated with AI technology. Schools should implement comprehensive digital literacy programs that educate students on privacy settings, the dangers of sharing personal information online, and how to report suspicious activity.
Second, legal reforms are essential to protect victims of deepfakes. This includes updating existing laws to specifically address the creation and distribution of AI-generated explicit images and ensuring that schools are aware of their legal responsibilities in these cases. The inconsistencies in how schools handle deepfake incidents highlight the need for standardized policies and clear legal guidelines to ensure that all students are protected.
Tech companies must also play a more active role in preventing the misuse of their AI tools. This includes developing more effective detection systems and working closely with law enforcement to identify and prosecute perpetrators. The responsibility of these companies is critical, as their platforms are often used to create and distribute deepfakes. By taking a more proactive stance, tech companies can help reduce the prevalence of deepfakes and protect vulnerable populations.
In addition, legislators should work to establish a uniform national policy that addresses the ethical use of AI in schools, ensuring that every student is protected regardless of location. This involves collaboration with technology experts to keep laws up-to-date with rapidly evolving AI capabilities and fostering public awareness campaigns to educate students and parents about the risks associated with deepfakes.
Finally, schools must take a stronger stance in protecting students. This includes implementing clear policies on digital harassment, providing support services for victims, and fostering a campus culture that condemns the misuse of technology. Beverly Vista Middle School sets the example. When administrators learned in February that eighth-grade boys at the school had created explicit images of 12- and 13-year-old female classmates, they quickly sent a message — subject line: "Appalling Misuse of Artificial Intelligence" — to all district parents, staff, and middle and high school students. The message urged community members to share information with the school to help ensure that students' "disturbing and inappropriate" use of A.I. "stops immediately."
The threat that AI deepfakes pose to young female students cannot be overstated. The consequences for their mental health, safety, and future opportunities are devastating. The incident at Westfield High School is a stark reminder of the dangers posed by this technology and the urgent need for action.