With Artificial Intelligence enabling the creation of unrealistic, idealised images of women, what harm might that cause? And how might we protect ourselves – and the next generation – from AI-generated images?
Last week, two things made me mad. Fuming, in fact.
As you’ve likely heard, sexually explicit images of Taylor Swift started spreading on social media, primarily on X (formerly known as Twitter). Somehow, they got through the filters that prevent explicit and violent content being shared. Millions of people saw the images before they were removed, and X temporarily blocked searches for ‘Taylor Swift’.
This was a ‘deepfake’. A deepfake is when someone uses artificial intelligence (AI) to manipulate an image, video or audio recording of another person in order to falsely portray them doing or saying things they didn’t say or do. And, in Taylor’s case, this also involved things she was wearing or not wearing. Trust me: if you don’t already know all the details, you’d rather not know.
The perpetrator of this assault – because that’s absolutely what this was – cruelly altered images of Taylor so ‘well’ that people believed they were legit. Shame on the ‘creator’.
As the New York Times stated in a story about what happened to Swift, “as the A.I. industry has boomed, companies have raced to release tools that enable users to create images, videos, text and audio recordings with simple prompts. The A.I. tools are wildly popular but have made it easier and cheaper than ever to create so-called ‘deepfakes’.” And a survey shows that deepfakes are primarily weaponised against women.
This reeks of misogyny.
The other thing that made me swear was what happened to Australian politician Georgie Purcell, 31. She posted on her social-media accounts to tell her followers that Nine News had, without asking permission, altered a photo of her. Her dress became a crop top and skirt, and her breasts looked bigger.
She’d already had a rough working day as a politician. “I endured a lot yesterday, but having my body and outfit photoshopped by the media wasn’t on my bingo card,” Georgie wrote on social media. “Note the enlarged boobs and outfit to be made more revealing. Can’t imagine this happening to a male MP.”
“Unfortunately,” she told Guardian Australia, “the difference for women [in politics and more generally] is that they also have to deal with the constant sexualisation and objectification that comes with having images leaked, distorted and AI generated.”
Nine News blamed an “automation by Photoshop”. But a spokesperson for Adobe said that use of its generative-AI features would have required “human intervention and approval”.
Georgie said: “I just hope that all media outlets can learn from this and the emerging risks that come with this technology”.
The Scary Rise of AI-Generated Images
The question is: how can we trust that images we see aren’t AI-generated? As the BBC reported, a study found there had been a 550% rise in the creation of doctored images between 2019 and 2023, “fuelled by the emergence of AI”.
The last 15 months have been a gamechanger. Large language models – primarily OpenAI’s ChatGPT, followed by Google’s Bard – emerged to great fanfare. When asked a question by a human, these ‘chatbots’ generate ‘human-like’ answers of sometimes dubious merit, based on massive data sets (they’ve effectively read the internet).
It’s not just text being coughed up, either, as generative models trained on images and text can themselves create new images. These ‘generative AI’ or ‘GenAI’ programs, including DALL-E 2, Stable Diffusion, Midjourney, and Snapchat’s ‘My AI buddy’, can create ‘photo-realistic’ images if someone feeds them a written prompt.
Increasingly, AI-generated images depicting the bodies and faces of ‘idealised’ women are being used instead of images of real women: in the media, on social media, in advertising (e.g. billboards), and by businesses (e.g. in marketing materials), just for starters.
A story in U.K. publication The Standard by tech journalist Mary-Ann Russon says “there’s a huge momentum towards investing in generative AI at the moment globally. In the fashion industry in particular, big brands are increasingly making headlines announcing trials of AI-powered technology.” Her story features a photo of two stunningly beautiful, AI-generated Levi’s ‘models’ – essentially, digital avatars.
Last year, a TikTok video featuring AI-generated images of South-Asian and East-Asian women (posted by an account named ‘AI World Beauties’) notched 1.7 million views. Commenting on this for ABC News, Shibani Antonette, a lecturer in data science, said “most of [these] generated images perpetuate colourism and cultural beauty standards”. Think light-ish skin, thin noses, full lips, and high cheekbones.
The Effect On The Beauty Standard
In an article for My Modern Met, Jessica Stewart (a digital-media specialist, curator and art historian) writes: “The beauty standard presented in media is always a source of attention, as it has a large effect on how we view ourselves. With the rise of AI, The Bulimia Project [a U.S. website that publishes research related to eating disorders] wanted to see what type of ‘ideal body’ popular image generators would produce.” Those chosen: DALL-E 2, Stable Diffusion, and Midjourney.
As Jessica writes, “40% of the AI-generated images they produced depicted an unrealistic body type. When prompted to create ‘the perfect female body according to social media in 2023’, all three image generators created women with small bodies. With their tiny waists, chiseled abs, and large breasts, the ‘ideal’ woman created by Midjourney was signaled as the furthest from reality.”
FFS. How can we tell what is reality anymore?
Because, unfortunately, this phenomenon perpetuates itself online. Algorithms promote images of people considered ‘beautiful’. And as AI-generated images of such people become more prevalent, those same algorithms will increasingly steer us towards them.
What About ‘Fun’ AI-Generated Images?
Something that’s proved popular is the app Lensa, which enables you to edit your photos with image-enhancing, AI-powered tools and filters to create your own ‘AI selfies’.
In November 2022, Lensa’s new ‘Magic Avatar’ app rocketed to the top of App Store charts. “Magic Avatar,” Lensa says, “is an app that turns a picture of you into a Disney princess, a NASA astronaut, a Pokemon trainer, or anything you can imagine. Simply upload a profile picture and a powerful AI algorithm will generate four variations based on the description you provide”. Imagine portraits of you made by a professional digital artist (who is trying to flatter you a little).
Is this just a bit of fun? Well, journalist Olivia Snow has written a story for wired.com with the disturbing title ‘The Magic Avatar App Generated Nudes From My Childhood Photos’ (yes, it literally did that). “Lensa’s terms of service,” Olivia writes, “instruct users to submit only appropriate content including ‘no nudes’. Yet, there are sinister violations inherent in the app, namely the algorithmic tendency to sexualize subjects to a degree that is not only uncomfortable but also potentially dangerous.”
“Many users – primarily women – have noticed that even when they upload modest photos, the app not only generates nudes but also ascribes cartoonishly sexualized features, like sultry poses and gigantic breasts, to their images.”
Possible Impact of AI-Generated Images on Eating Disorders
As women, it’s already hard enough not to compare our bodies and faces with real photos of models and celebrities! And digitally edited photographs have been correlated with body-image issues amongst young women. Throw in AI-generated photos of women with ‘idealised’ beauty, and what do we get? There’s no doubt whatsoever that some women – particularly young women – will feel insecure about their looks compared to the ‘women’ in AI-generated images. Might they go to great lengths to look ‘perfect’?
And could this lead not just to body-image woes but even enable eating disorders? Well, a story by Washington Post technology columnist Geoffrey A. Fowler is titled “AI is acting ‘pro-anorexia’ and tech companies aren’t stopping it”. The article reads: “Disturbing fake images and dangerous chatbot advice: new research shows how ChatGPT, Bard, Stable Diffusion and more could fuel one of the most deadly mental illnesses”.
Fowler asked these chatbots for advice on eating-disorder practices – and got detailed, enabling answers. “Then I started asking AIs for pictures. I typed ‘thinspo’ — a catchphrase for thin inspiration — into Stable Diffusion [which can generate images from a text description]. It produced fake photos of women with thighs not much wider than wrists.” He also cites a large study that produced similar findings.
“This is disgusting and should anger any parent, doctor or friend of someone with an eating disorder,” Fowler writes. “There’s a reason it happened: AI has learned some deeply unhealthy ideas about body image and eating by scouring the internet. And some of the best-funded tech companies in the world aren’t stopping it from happening.”
AI Is A Mirror, Not A Monster
We shouldn’t see AI as a predator that has emerged out of nowhere. Massey University lecturer Dr Kevin Veale, part of Massey’s Digital Culture Laboratory, tells Capsule that generative-AI tools “don’t invent anything new: they parrot what they’ve been fed, often from stolen material. That means that they don’t invent new problems – rather, they magnify existing problems within our culture. So yes, it’s important that we understand the negative impact of unrealistic, idealised, altered bodies. But, also, any concerns about generative AI making the situation worse may be a distraction. The core problem, that is hurting people, is a culture that means unrealistic, idealised bodies are seen as desirable in ways that can be tied to advertising or other kinds of social influence.”
Veale adds that generative AI will likely “make producing unrealistic idealised [images of] bodies more efficient and reduce costs by lowering the number of humans involved. It may lead to an increase in volume [of these images] due to lower costs. At minimum, it means less human oversight. So that’s fewer people who might say ‘we’ve gone too far, fix it’.”
So… What Can We Do?
We can’t put the AI genie back in the bottle. But we need to critically consider the issue now, rather than three or four years down the track. Because the more something happens, the more normal it feels.
Something we can do is think carefully about what we post on social media and look out for products that help safeguard our images. As an MIT Technology Review article says, “remember that selfie you posted last week? There’s currently nothing stopping someone taking it and editing it using powerful generative AI systems. Even worse, thanks to the sophistication of these systems, it might be impossible to prove that the resulting image is fake.”
But MIT researchers have created a tool, called PhotoGuard, that “works like a protective shield by altering photos in tiny ways that are invisible to the human eye but prevent them from being manipulated. If someone tries to use an editing app based on a generative AI model such as Stable Diffusion to manipulate an image that has been ‘immunized’ by PhotoGuard, the result will look unrealistic or warped. This tool could, for example, help prevent women’s selfies from being made into nonconsensual deepfake pornography.” So look out for tools like these.
When it comes to seeing AI-generated images day to day, what can we do to protect ourselves and our children from harm? It’s about awareness, avoidance, monitoring, and contributing to a conversation about this issue.
- Remind yourself that some of the images you see in the media, on social media, or in advertising may be AI-generated
- Consider not using any tools that provide AI-generated images – and have discussions with friends and family members about why you’ve made that choice
- If you are using generative-AI tools, consider how you could do it carefully and sparingly
- Ask your children to avoid generative-AI tools, or otherwise teach them how to use them carefully and sparingly
- Talk to your children about how AI-generated images aren’t real and can be damaging
And remember, when it comes to depicting women’s bodies, AI is a lie. Don’t believe it.