There are myriad issues surrounding deepfakes, but by far the most pressing is the use of the technology to create non-consensual pornographic material. The problem is inherently misogynistic: a comprehensive 2023 report into deepfakes determined that deepfake pornography constitutes 98% of all deepfake videos found online. Worse still, 99% of those targeted by deepfake pornography are women.
“Deepfake sexual abuse is commonly about trying to silence women who speak out,” says Clare McGlynn, a leading authority on deepfake laws and Professor of Law at Durham University in the UK. “We see this with Taylor Swift. We see this with women politicians, where deepfake porn is an attempt to intimidate them. We see it with many women in the public eye.”
Amanda Manyame, Equality Now’s Digital Rights Advisor, who works at the intersection of tech and the law, agrees that women in the public eye are at particular risk of deepfake abuse. She tells Glamour, “Anyone can be a victim of deepfake image-based sexual abuse, but women in the public eye and positions of authority—such as celebrities, politicians, journalists, and human rights defenders—are particularly targeted.”
But seeing high-profile women victimized in this way also has a profound impact on regular women and girls. When Ellie Wilson, an advocate for justice reform, tweeted about the troubling response to the deepfakes of Swift, she was met with her own flurry of online abuse.
“People threatened to make similar deepfake images of me,” she tells Glamour. “These attacks for merely stating my opinion highlight just how dangerous it is for women to simply exist on the internet.”
Olivia DeRamus, the founder and CEO of Communia, a social network created by and for women, notes that even speaking up against deepfaking puts other women in danger. “Just talking about [deepfaking] as a woman paints a target on my back, along with other advocates, the female journalists covering this, the #swifties speaking out, and even female politicians who want to tackle the issue.”
Professor McGlynn emphasises that deepfaking represents a threat to all women and girls, citing the “potentially devastating impact on our private and professional lives.”
It’s clear that deepfake technology is rapidly hurtling out of control. Amanda Manyame cites “rapid advances in technology and connectivity” that make it “increasingly easy and cheap to create abusive deepfake content.” She adds, “Cyberspace facilitates abuse because a perpetrator doesn’t have to be in close physical proximity to a victim.
“In addition, the anonymity provided by the internet creates the perfect environment for perpetrators to cause harm while remaining anonymous and difficult to track down.”
Moreover, most countries are ill-equipped to deal with tech-facilitated harms like deepfake image-based abuse. In the UK, for example, it is an offense under the Online Safety Act to share deepfake pornographic content without consent, but the Act does not cover the creation of such images.
“This gap,” Manyame explains, “has created an enabling environment for perpetrators who know they are unlikely to be discovered or punished. The situation is worsened by the lack of legal accountability governing the tech sector, which currently does not have to ensure safety by design at the coding or creation stage.”