A Deepfake Porn Bot Is Being Used to Abuse Thousands of Women

An AI tool that “removes” items of clothing from photos has targeted more than 100,000 women, some of whom appear to be under the age of 18.
Photograph: Vlaaka Kocvarová/Getty Images

Pornographic deepfakes are being weaponized at an alarming scale, with at least 104,000 women targeted by a bot operating on the messaging app Telegram since July. Thousands of people use the bot every month to create nude images of friends and family members, some of whom appear to be under the age of 18.

The still images of nude women are generated by an AI that "removes" items of clothing from a non-nude photo. Every day the bot sends out a gallery of new images to an associated Telegram channel, which has almost 25,000 subscribers. The sets of images are frequently viewed more than 3,000 times. A separate Telegram channel that promotes the bot has more than 50,000 subscribers.

Some of the images produced by the bot are glitchy, but many could pass for genuine. “It is maybe the first time that we are seeing these at a massive scale,” says Giorgio Patrini, CEO and chief scientist at deepfake detection company Sensity, which conducted the research. The company is publicizing its findings in a bid to pressure services hosting the content to remove it, but it is not publicly naming the Telegram channels involved.

The actual number of women targeted by the deepfake bot is likely much higher than 104,000. Sensity was only able to count images shared publicly, and the bot gives people the option to generate photos privately. “Most of the interest for the attack is on private individuals,” Patrini says. “The very large majority of those are for people that we cannot even recognize.”

As a result, it is likely very few of the women who have been targeted know that the images exist. The bot and a number of Telegram channels linked to it are primarily Russian-language but also offer English-language translations. In a number of cases, the images created appear to depict girls who are under the age of 18, Sensity adds, saying it has no way to verify this but has informed law enforcement of their existence.

Unlike other nonconsensual explicit deepfake videos, which have racked up millions of views on porn websites, these images require no technical knowledge to create. The process is automated and can be used by anyone—it’s as simple as uploading an image to any messaging service.

The images are automatically created once people upload a clothed image of the victim to the Telegram bot from their phone or desktop. Sensity’s analysis says the technology only works on images of women. The bot is free to use, although it limits people to 10 images per day, and payments have to be made to remove watermarks from images. A premium version costs around $8 for 112 images, Sensity says.

“It's a depressing validation of all the fears that those of us who had heard about this technology brought up at the beginning,” says Mary Anne Franks, a professor of law at the University of Miami. Franks provided some feedback on the Sensity research before it was published but was not involved in the report’s final findings. “Now you've got the even more terrifying reality that it doesn't matter if you've never posed for a photo naked or never shared any kind of intimate data with someone, all they need is a picture of your face.”

It’s believed that the Telegram bot is powered by a version of the DeepNude software, which Vice first reported on in June 2019. The original creator killed the app, citing fears about how it could be used, but not before it reached 95,000 downloads in just a few days.

The code was quickly backed up and copied. The DeepNude software uses deep learning and generative adversarial networks to generate what it predicts victims’ bodies look like. The AI is trained on a set of images of clothed and naked women and synthesizes body parts in the final images.

“This is now something that a community has embedded into a messaging platform app, and therefore they have pushed forward the usability and the ease to access this type of technology,” Patrini says. The Telegram bot is powered by external servers, Sensity says, lowering the barrier to entry. “In a way, it is literally deepfakes as a service.”

Telegram did not answer questions about the bot and the abusive images it produces. Sensity’s report also says the company did not respond when it reported the bot and channels several months ago. Telegram’s terms of service are brief. One of its three bullet points says that people should not “post illegal pornographic content on publicly viewable Telegram channels, bots, etc.”

In an expanded set of frequently asked questions, Telegram says it does process requests to take down “illegal public content.” It adds that Telegram chats and group chats are private, and the company doesn’t process requests related to them; however, channels and bots are publicly available. A section on takedowns says “we can take down porn bots.”

Before the publication of this article, the Telegram channel that pushed out daily galleries of bot-generated deepfake images saw all of the messages within it removed. It is not clear who removed them.

Unusually for this sort of activity, there is some data on who has used the bot and their intentions. Within the Telegram channels linked to the bot, there is a detailed “privacy policy,” and people using the service have answered self-selecting surveys about their behavior.

An anonymous poll posted to the Telegram channel in July 2019 was answered by more than 7,200 people, of whom 70 percent said they were from “Russia, Ukraine, Belarus, Kazakhstan, and the entire former USSR.” All other regions of the world accounted for less than 6 percent of the poll share each. People using the bot also self-reported finding it through the Russian social media network VK. Sensity’s report says that it has found a large amount of deepfake content on the social network, and the bot also has a dedicated page on the site. A spokesperson for VK says it "doesn't tolerate such behavior on the platform" and has "permanently blocked this community."

A separate July 2019 poll answered by 3,300 people revealed people’s motivations for using the bot. It asked, “Who are you interested to undress in the first place?” The overwhelming majority of respondents, 63 percent, selected the option “Familiar girls, whom I know in real life.” Celebrities and “stars” was the second-most selected category (16 percent), and “models and beauties from Instagram” was third (8 percent).

Experts fear these types of images will be used to humiliate and blackmail women. But as deepfake technology has rapidly scaled, the law has failed to keep up, focusing mostly on the technology’s future political impact.

Since deepfakes were invented at the end of 2017, they have mostly been used to abuse women. Growth over the past year has been exponential, as the technology required to make them becomes cheaper and easier to use. In July 2019 there were 14,678 deepfake videos online, previous Sensity research found. By June this year that number had climbed to 49,081. Almost all of these videos were pornographic in nature and targeted women.

In August, WIRED reported on how deepfake porn videos had gone mainstream. More than 1,000 abusive videos were being uploaded to the world’s biggest porn websites every month. One 30-second video that uses actress Emma Watson’s face, hosted on XVideos and Xnxx, which are owned by the same company, has been watched more than 30 million times. The company did not respond to requests for comment at the time, while xHamster scrubbed dozens of deepfake videos with millions of views from its site after WIRED highlighted them.

The emergence of bot-generated images pushes deepfake abuse into dangerous new territory. The Telegram bot is not the only example of the underlying DeepNude system being used in the wild. The Google Play store hosts apps that pixelate areas of photos to make subjects look like they are not wearing any clothes, or that claim to use X-rays to see through people’s clothes. Other websites charge people for access to deepfake technology, with one offering a $20-per-month subscription service.

Sensity believes the person behind the Telegram bot is likely to be based in Russia. When messaged about the bot, the creator downplayed the impact the technology was having. “Everything is freely available. If you decide to use the entertainment app for selfish purposes, then you are responsible for this, you must give an account of your actions,” the anonymous creator claimed. The bot’s privacy policy claims images produced using it are “a fake parody that shall not be confused as a real [sic] or related to any person,” and images are produced “for the purpose of fun.”

Multiple versions of the DeepNude code are publicly available on GitHub, the Microsoft-owned software platform. At the time of writing, two of the DeepNude code sets had been updated within the last week. Their documentation says technically minded individuals can use the code without watermarks, and it offers demos on how best to operate the system.

The open-source versions of the DeepNude code remain on GitHub despite the company previously removing them. After the DeepNude app first appeared in June 2019, GitHub said it violated its “acceptable use policy” and removed some of the files. When asked about the DeepNude files still on its platform, a GitHub spokesperson said it does not moderate user-uploaded content unless it receives complaints. “We do not condone using GitHub for posting sexually obscene content and prohibit such conduct in our Terms of Service and Acceptable Use Policies,” a company spokesperson says. At the time of writing, all of the versions of the open-source code were still accessible on the site.

The very existence of this technology, and the lack of action to stop its spread by technology companies, could cause long-term harm for women and girls, Franks says. “It will definitely inhibit what you’re going to say, what you’re willing to do, the risks you’re able to take, the kinds of jobs that you’re going to apply for. These are all ways in which we are imposing a silencing effect on women and girls.”

This story originally appeared on WIRED UK.