Ultra-sexualised images of Billie Eilish likened to “deepfake pornography” went viral on TikTok and were amplified by the app’s algorithm.
A photo gallery video featuring the singer’s face on sexually exaggerated bodies was seen by 11 million people in four days before being removed for violating TikTok’s community guidelines around sexual harassment.
Tracy Harwood, professor of digital culture at the UK’s De Montfort University, told VICE World News it looked like the images had been created using some form of AI imaging or AI art generator app.
The video appeared on the app's For You page, despite TikTok's own guidelines saying that overtly sexualised content is ineligible for that feed.
In recent weeks, apps such as Lensa and Midjourney have become hugely popular, with Instagram, Twitter and TikTok full of people’s digital avatars based on their own selfies.
But many women have said that the apps are perpetuating misogyny by removing their clothes or presenting them with sexualised bodies.
Megan Fox is one of several public figures who have recently criticised Lensa for sexualising the AI avatars it generates from users’ selfies.
Eilish, who has spoken about the “bruising experience” of growing up in the public eye, said in a recent BBC interview that her song “Your Power” is not only about “the man who abused his power when he was with me, how much trauma he has caused me, physically and emotionally,” but about several people she has met.
In 2019, she revealed that one of the reasons behind her trademark baggy clothes is that “Nobody can have an opinion because they haven't seen what’s underneath. Nobody can be like, ‘she’s slim-thick,’ ‘she’s not slim-thick,’ ‘she’s got a flat ass,’ ‘she's got a fat ass.’ No one can say any of that because they don't know.”
Hera Hussain, founder and CEO of Chayn, which supports victims of gender-based violence on and offline, said: “The TikTok algorithm isn't set up to discern between what kind of content is getting engagement and this is a critical issue.
“It should not recommend these videos to the feeds of their users. It's not a harmless meme. It's not a still from the singer's own videos. It's a hyper-sexualised AI-manipulated image."
The TikTok account which posted the images had over 76,000 followers and linked to an Instagram account with just over 2,000 followers that has also shared the same images. In its Instagram stories it also posted manipulated images of Eilish and said “$20 uncensored images. Inbox”, inviting users to pay for more explicit content.
Andrea Simon, director of the End Violence Against Women Coalition, said: “Social media platforms have increasingly been making claims that they are committed to tackling harmful content including misogyny and violence against women. We know that content which violates TikTok’s terms and conditions often remains visible and even widely amplified by its algorithms across the platform, which is incredibly alarming given that its audience is primarily children and young people.”
“By taking no action on this type of harmful content, including a lack of clear ways to report it or appropriate consequences for those that are abusive, TikTok is facilitating and ultimately profiting from the abuse of women and girls as well as the potential radicalisation of its young male users into harmful attitudes and beliefs about women and girls and sex and relationships.”
Harwood, the digital culture expert, told VICE World News that there was strong evidence these images had been created using artificial intelligence. “The closer you look at the images, the more you can see problems with them – like the outfit not really being the same on both sides of the image, and edges not clearly defined; the skin finish looking plastic in places.”
In November the UK government said that it planned to make the sharing of non-consensual pornographic deepfakes illegal under the Online Safety Bill, a new piece of legislation being drafted to combat online harms.
Trang Lee, a doctoral researcher in the Australian Research Council Centre of Excellence in Automated Decision-Making and Society at Monash University, said TikTok “definitely have ethical, social, and human rights responsibilities to their users and those affected by their platforms,” but that spotting deepfakes can be challenging, and that platforms’ current detection efforts still have low accuracy in identifying AI-generated content.
“We definitely need to develop regulations, increase public awareness, and deploy mechanisms such as screening, reporting, and removal policies to reduce or limit the spread of harmful deep fakes.”
A TikTok spokesperson said: "This content violates our Community Guidelines, which clearly states that we do not allow content which alters or morphs an image of another individual to portray or imply sexual suggestiveness."