AI algorithms are objectifying women’s bodies
So The Guardian posted this article, which discusses AI tools that rate pictures of women as more sexually suggestive than those of men, even tagging images of women in everyday situations as sexually suggestive.
So these AI tools are being developed by big tech companies like Microsoft and Google, and their goal is essentially to pick up on violent or pornographic content and block it on social media before users see it. Two Guardian journalists then used these tools to analyse a whole bunch of photos of men and women in underwear, working out, or with partial nudity, and found that pictures of women were rated as racier, or more sexually suggestive, than similar pictures of men.
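For a sense of what these tools look like from the outside, here’s a rough sketch of how an image can be scored with one such service, Google Cloud Vision’s SafeSearch feature. To be clear, this is our own illustration of the kind of call the journalists describe, not their actual code, and the file name is just a placeholder:

```python
from google.cloud import vision  # pip install google-cloud-vision

# Requires Google Cloud credentials to be configured locally.
client = vision.ImageAnnotatorClient()

# Load a local photo; "photo.jpg" is a placeholder file name.
with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# SafeSearch returns likelihoods for several categories, including "racy".
response = client.safe_search_detection(image=image)
annotation = response.safe_search_annotation

# Likelihood is an enum: UNKNOWN, VERY_UNLIKELY, ... VERY_LIKELY.
print("Racy:", vision.Likelihood(annotation.racy).name)
```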
The tools also looked at a picture of a pregnant belly and rated it as sexually suggestive with 90% confidence.
So they ran some more experiments, because they wanted to see what exactly the AI was analysing in the photos that caused it to view an image as sexually suggestive. In a video shown in the article, a man stands in long pants with a bare chest and gets a raciness score of less than 22%. But when he puts a bra on, the score jumps to 97%. So they realised that the AI is tagging the bra itself as inherently suggestive, rather than recognising it as an everyday item of clothing that many women wear, the same way they would wear pants or a shirt.
Now the obvious next question is: why is the AI doing this, and where did it learn it from? Basically, these tools are developed using machine learning, where computers learn from data. The computers aren’t given rules telling them how to do a task; instead they get something called training data. In this context, people are hired to label a whole bunch of images, and the computers then analyse the scores given by those humans and find patterns, so that they can replicate the human decisions on new images.
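To make that concrete, here’s a minimal sketch of the supervised-learning idea in Python using scikit-learn. The feature vectors and labels below are hypothetical stand-ins, not the actual pipelines Microsoft or Google use:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is a numeric feature vector
# extracted from an image, and each label is a human rater's judgement
# (1 = "sexually suggestive", 0 = "not suggestive").
X_train = np.array([
    [0.9, 0.1, 0.3],   # features for image 1
    [0.2, 0.8, 0.5],   # features for image 2
    [0.7, 0.2, 0.9],   # features for image 3
    [0.1, 0.9, 0.4],   # features for image 4
])
y_train = np.array([1, 0, 1, 0])  # the human raters' labels

# The model is never given rules about what "suggestive" means;
# it only finds patterns that reproduce the human labels.
model = LogisticRegression()
model.fit(X_train, y_train)

# A new, unseen image is scored using those learned patterns,
# so any bias in the human labels is baked into the score.
new_image_features = np.array([[0.8, 0.15, 0.35]])
print(model.predict_proba(new_image_features)[0, 1])  # "raciness" probability
```

The key point is that last comment: the model has no concept of suggestiveness beyond whatever patterns separate the images humans labelled 1 from the images they labelled 0.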
Now the thinking here is that the people who labelled those first photos might have been straight men, who associate men working out with fitness, but might look at a similar image of a woman and think that it's suggestive. Whether or not that is a conscious or unconscious bias is probably another conversation in itself.
So what this means is that when the AI is choosing which images need to be suppressed, or even shadowbanned, it is doing so with this built-in gender bias. Now this has widespread effects throughout social media, but one of the most notable is for people who rely on social media for work. In particular, as they note in the article, people with chronic illness or disability often rely on social media as their form of income, and shadowbanning can harm their business.
People can be locked out of their accounts, have their content removed from the explore page, or not show up under relevant hashtags.
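To illustrate how a single biased score can drive all of those outcomes, here’s a hypothetical moderation sketch. The threshold, function name, and actions are our invention for illustration, not any platform’s actual logic:

```python
# Hypothetical moderation logic: a single "raciness" score,
# produced by a model like the one sketched above, gates visibility.
RACY_THRESHOLD = 0.8  # invented cutoff, purely for illustration

def moderate(post_id: str, raciness_score: float) -> str:
    """Decide how to treat a post based on its model score."""
    if raciness_score >= RACY_THRESHOLD:
        # The post stays up but is hidden from discovery surfaces:
        # no explore page, no hashtag results. The user isn't told.
        return f"shadowban {post_id}"
    return f"show {post_id}"

# If the model systematically over-scores images of women (e.g. a
# photo with a bra at 0.97 vs. a bare chest at 0.22), women's posts
# cross the threshold far more often for equivalent content.
print(moderate("post_woman_workout", 0.97))  # -> shadowban
print(moderate("post_man_workout", 0.22))    # -> show
```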
The whole article is a great read; we’d really recommend checking it out here.
Make sure to subscribe to our newsletter if you want to read more blog posts like this one.