Everyone is a target: Inside Deepfake Pornography

  • Salient Mag
  • 4 days ago

By PHOEBE ROBERTSON


Deepfake pornography has been called the future of revenge porn, but that term doesn’t quite capture it. Unlike traditional image-based abuse, it doesn’t require an explicit photo to exist in the first place. The only thing needed is a face. And in an era where students document their lives in digital spaces, that means anyone with a social media account is a potential target.


The risk is higher than most people realise. In 2024, South Korea uncovered entire Telegram networks where male university students were generating and sharing deepfake pornography of their female classmates. Not crude Photoshop jobs, but hyper-realistic videos—sophisticated enough to mimic facial expressions, body movement, even speech. These weren’t public figures or celebrities. They were students. Ordinary women whose only mistake was existing in a digital world.


And yet, this crisis barely registered in New Zealand. A few headlines, a passing mention in international reports, but no real conversation. No outcry. No urgency. If it wasn’t happening here, it wasn’t happening—except that it was. New Zealand schools have already reported cases of AI-generated explicit images being circulated among students. At the university level, the silence is deafening.


Unlike revenge porn, which relies on intimate images that were once real, deepfake pornography makes anyone a potential victim. A university student shouldn’t have to worry that a class photo, a LinkedIn headshot, or a candid Instagram post could be weaponised against them. But that’s exactly where we are. Victims describe the same symptoms of PTSD as survivors of physical assault: paranoia, anxiety, withdrawal.

And still, under New Zealand law, deepfake pornography is not explicitly criminalised. The Harmful Digital Communications Act (HDCA) 2015 makes it illegal to share an intimate image without consent—but does a deepfake count as ‘intimate’ if it was never real? The law also requires proof that the offender intended to cause harm. What if they claim it was a joke? What if they argue it’s obviously fake?


That legal grey area leaves victims with little recourse. Unless a deepfake meets the criteria for defamation, harassment, or child exploitation, there is no clear pathway for justice. The burden falls entirely on the victim to prove their harm, to convince authorities that this isn’t just an internet prank—that it is, in fact, a violation.

Technology moves faster than policy. AI detection tools exist but remain flawed. Platforms like Telegram, after international pressure, have begun cooperating with authorities to remove explicit content—but that’s reactive, not preventative. By the time a victim becomes aware of a deepfake, the damage is already done.


Other countries have begun addressing the issue. In the United Kingdom, the Online Safety Act 2023 criminalises the sharing of intimate images that show or "appear to show" an individual without consent, explicitly including deepfakes within image-based abuse offences. South Korea has gone further, criminalising not just the creation but also the possession and consumption of deepfake pornography, with penalties of up to three years in prison or fines of 30 million won (approximately $36,000).


These examples highlight a critical point: New Zealand’s current legal framework is outdated. The HDCA was not designed with AI-generated content in mind. Its provisions fail to address the complexities of deepfakes, leaving victims vulnerable and perpetrators unaccountable.


The need for comprehensive law reform is urgent. Legislation must explicitly criminalise the creation and distribution of non-consensual deepfake pornography, recognising it as a form of image-based sexual abuse. The burden of proof should shift away from victims, acknowledging the inherent harm and violation of autonomy that deepfakes represent.


Beyond criminal penalties, victims need accessible avenues for redress, including injunctions and damages. Universities must also take responsibility, implementing clear policies on deepfakes, establishing support systems, and ensuring swift removal processes to mitigate reputational harm.


The rapid advancement of AI technology demands a proactive response. Without explicit laws addressing deepfake pornography, New Zealand risks becoming a haven for digital exploitation, leaving its citizens unprotected.


The question isn’t whether deepfake pornography will become a crisis in New Zealand universities. The question is when. And when it does, will we still be pretending it’s someone else’s problem?

