AI-generated porn scandal rocks University of Hong Kong after law student allegedly created deepfakes of 20 women

A scandal involving AI-generated pornography has erupted at the University of Hong Kong (HKU), shaking the campus community and raising serious ethical questions about the misuse of artificial intelligence.

A law student has been accused of creating and distributing deepfake pornography featuring at least 20 female students and faculty members without their consent.

The alleged perpetrator, whose identity has not been publicly released pending investigations, reportedly used sophisticated AI technology to convincingly superimpose the faces of these women onto existing pornographic videos.

The incident has sparked outrage and fear among the university's female population.

Many have expressed feeling violated and deeply disturbed by the breach of their privacy and the potential for lasting reputational damage.

The university's administration has condemned the actions, promising a thorough investigation and emphasizing its commitment to protecting the safety and well-being of its students.

The HKU Student Union has also issued a statement denouncing the act and calling for stronger measures to prevent similar occurrences.

Beyond the immediate impact on the victims, the scandal highlights the urgent need for regulations surrounding AI-generated content and its potential for misuse.

The ease with which deepfakes can be created and distributed poses a significant threat, not only to individual privacy but also to social stability and trust.

The incident raises questions about the legal recourse available to victims and the effectiveness of existing laws in addressing this emerging form of online abuse.

The university's response is being closely scrutinized.

Critics are questioning whether the institution's existing policies and procedures are sufficient to deal with such sophisticated forms of online harassment.

Calls for enhanced digital literacy training, improved reporting mechanisms, and stricter penalties for perpetrators are growing louder.

The HKU case serves as a stark warning of the potential for AI technology to be weaponized for malicious purposes, urging a wider societal conversation about ethical AI development and the need for robust legal frameworks to combat the misuse of this powerful technology.

The long-term implications of this scandal, both for the victims and for the future of AI ethics, are likely to be profound.

Hong Kong's privacy watchdog said Tuesday it has launched a criminal investigation into an AI-generated porn scandal at the city's oldest university, after a student was accused of creating lewd images of his female classmates and teachers.

Three people alleged over the weekend that a University of Hong Kong (HKU) law student fabricated pornographic images of at least 20 women using artificial intelligence, in what is the first high-profile case of its kind in the Chinese financial hub.

The university sparked outrage over a punishment widely perceived as lenient after it said Saturday it had only sent a warning letter to the student and demanded he apologize.

But Hong Kong's Office of the Privacy Commissioner for Personal Data said Tuesday that disclosing someone else's personal data without consent, and with an intent to cause harm, could be an offense.

The watchdog "has begun a criminal investigation into the incident and has no further comment at this stage," it said, without mentioning the student.

The accusers said in a statement Saturday that Hong Kong law only criminalises the distribution of "intimate images," including those created with AI, but not the generation of them.

There is no allegation so far that the student spread the deepfake images, and so "victims are unable to seek punishment... through Hong Kong's criminal justice system", they wrote.

The accusers said a friend discovered the images on the student's laptop.

Experts warn the alleged use of AI in the scandal may be the tip of a "very large iceberg" surrounding non-consensual imagery.

"The HKU case shows clearly that anyone could be a perpetrator, no space is 100 percent safe," Annie Chan, a former associate professor at Hong Kong's Lingnan University, told AFP.

Women's rights advocates said Hong Kong was "lagging behind" in terms of legal protections.

"Some people who seek our help feel wronged, because they never took those photos," said Doris Chong, executive director at the Association Concerning Sexual Violence Against Women, referring to cases at the group's crisis center. "The AI generations are so life-like that their circulation would be very upsetting."

Asked about the case at a Tuesday press briefing, Hong Kong leader John Lee said most of the city's laws "are applicable to activities on the internet."

HKU said on Saturday it would review the case and take further action if appropriate.

AI-generated pornography has also made headlines in the U.S., where roughly 6% of American teens have reportedly been targets of nude deepfake images that look like them.

Last month, Meta removed a number of ads promoting "nudify" apps — AI tools that generate sexualized images of real people — after an investigation found hundreds of such advertisements on its platforms.

In May, one of the largest websites dedicated to deepfake pornography announced that it was shutting down after a critical service provider withdrew its support, effectively halting the site's operations.