Elon Musk made a meme about the sexual exploitation of women’s bodies online. So I made some memes about him.

12 February 2024

At the frontline of global technological development is a man who treats generative AI as a game, one that is played at the expense of women’s bodies, with hundreds of millions of spectators watching on.

Just weeks after sexually explicit AI-generated images of Taylor Swift were circulated on his social media platform X and viewed more than 47 million times, Elon Musk trivialised the very real threat these technologies pose to women.

On Sunday night (Monday morning in Australia), Musk posted a meme that shows just how little he cares about this threat.

“Boobs rock, it’s a fact,” he wrote on the post.

Elon Musk’s post shows how men (still) disrespect women’s bodies – online and beyond.

It’s the billionaire, Silicon Valley bro version of an 11-year-old boy typing “5318008” on his calculator and turning it upside down so it spells “BOOBIES”. I can picture an adolescent-like giggle escaping from Musk as he put fake boobs on the woman in the meme, wrote the caption and posted it to his 172 million followers on X, his very own platform.

But the truth is, the meme was immature, tone-deaf and the clearest indication we have that Musk just isn’t funny.

Luckily, I am. And I want to show him how meme-making is really done.

The AI blame game

If Musk ever faced real accountability and people wanted to bring him down for making fun of a very real issue, I wonder if he would point the finger at AI.

Because that’s the pattern we’re seeing. When the pornographic images of Taylor Swift were distributed all over the platform, people vaguely blamed it on technology. No humans were accountable.

When an image of Victorian MP Georgie Purcell’s body was digitally altered and aired on a national television news broadcast, Nine News director Hugh Nailon cited AI as the reason for the alteration. No humans were accountable.

I’m tired of the AI blame game. It’s about time we point the finger at the real problem here – the people running the show.

Men are so quick to blame it on the robots.

SpaceX’s lawsuit

It’s an interesting choice for Elon Musk to meme-ify image-based sexual harassment while his company SpaceX is facing a lawsuit for sexual harassment and discrimination.

In January, the California Civil Rights Department informed SpaceX of seven complaints made by former employees at the rocket-making company. The complaints alleged that managers fostered a hostile work environment in which jokes about sexual harassment went unchecked. According to the accusations, women were paid less than men at the organisation, and any employee who complained about the conditions was dismissed.

Last week, Bloomberg broke the story that, as a result of those complaints, SpaceX is being sued for sexual harassment and discrimination.

Did Musk miss that memo? Because I don’t think prompting AI to alter an image to make a woman’s breasts bigger is helping the case, nor is making a meme about it.

Bad timing on that meme, bro.

Women’s bodies and AI technologies

Elon Musk owns one of the world’s biggest social media platforms. Ultimately, this guy gets to decide what goes on the platform and what stays off.

Last week, Women’s Agenda published an article about a woman who was kicked out of a shopping mall for wearing a midriff top. The article’s main image showed her stomach.

When we posted the story on social media, we ran into a problem. The article was blocked from being posted on X.

Why? Because of the main image. Because of the woman’s stomach.

To be clear: AI-generated pornographic images of women can be widely distributed on the platform and seen 47 million times before Musk and the team at X notice a problem. But a woman’s belly? Not ok.

AI-generated porn? Yes. Tummies? Absolutely not.

Women’s bodies are still being regulated by men – online and beyond. The men running the online world don’t see a problem with deepfake images, because the images don’t affect them: rather, they see them as curated for their pleasure and their pleasure only, because “boobs rock”, right? But content like this can ruin names, reputations, lives and so much more.

Of course, regulation of the technology itself is important. Giving people the ability to create this dangerous content gives people the choice to create this dangerous content. That’s why so many women in AI are calling for more regulation, and for a stronger gender lens in government oversight of AI.

But don’t try to tell me it’s a robot’s fault. Because Musk’s poor attempt at being funny speaks volumes about how men in these spaces (still) disrespect women.

Maybe it’s not giving people the choice to create the dangerous content that is the problem. Maybe it’s the fact we’re letting them get away with it.

‘Egregious invasion of privacy’: Taylor Swift’s name blocked on X after sexually explicit deepfakes go viral

29 January 2024

Sexually explicit deepfake images of Taylor Swift have been circulated on Elon Musk’s social media platform X, sparking grave concerns over the growth of artificial intelligence (AI).

X Corp. (formerly known as Twitter) responded to the incident on Sunday night by removing the images and the account that first published the deepfakes, as well as temporarily blocking users’ ability to search “Taylor Swift” on the platform.

“This is a temporary action and done with an abundance of caution as we prioritise safety on this issue,” said Joe Benarroch, head of business operations at X.

The pop star’s name is still blocked on X, and searching for it returns an error.

Typing “Taylor Swift” into the X search bar results in an error message. Credit: Women’s Agenda

According to a report from The New York Times, one of the several images that were in circulation was viewed 47 million times before the deepfake, along with the account that published it, was removed from X.

In a news briefing on Friday, White House press secretary Karine Jean-Pierre called on Congress to take legislative action against the abuse and misuse of AI technologies online, but also urged social media platforms to take greater measures to regulate content.

“This is very alarming. And so, we’re going to do what we can to deal with this issue,” Jean-Pierre said.

“We know that lax enforcement disproportionately impacts women and they also impact girls, sadly, who are the overwhelming targets.

“We believe they (the platforms) have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual, intimate imagery of real people.”

The creation and distribution of deepfake AI images have been widely regarded as a form of gender-based violence, as the practice disproportionately targets women and girls online.

In 2019, a study by Deeptrace, a cybersecurity company, found that 96 per cent of deepfake videos online were of an intimate or sexual nature. The people depicted in the AI-generated content were primarily women actors, musicians and media professionals.

‘Extremely harmful content’

Australia’s eSafety Commissioner Julie Inman Grant spoke to Women’s Agenda, explaining how easy it is to create deepfakes and how devastating they can be for the people targeted.

“Deepfakes, especially deepfake pornography, can be devastating to the person whose image is hijacked and altered without their knowledge or consent, no matter who they are,” Commissioner Inman Grant said.

“Image-based abuse, including deepfake porn, is a persistent online harm which also represents one of the most egregious invasions of privacy.”

Generative AI is user-friendly and widely accessible. Inman Grant said content that would previously have required specialised software and significant computing power can now be generated with the click of a button.

“As a result, it’s becoming harder and harder to tell the difference between what’s real and what’s fake. And it’s much easier to inflict great harm,” Inman Grant said.

Australia’s online safety regulatory body, eSafety, lists the use of AI to create sexually explicit deepfake images as “image-based abuse”. Online users can report image-based abuse on eSafety’s website.

While eSafety has a 90 per cent success rate in getting deepfakes and other abusive material taken down from online sites, including social media platform X, Commissioner Inman Grant called on the “purveyors and profiteers of AI” to do more.

“We’re not going to regulate or litigate our way out of this – the primary digital safeguards must be embedded at the design phase and throughout the model development and deployment process,” she said.

“And platforms need to be doing much more to detect, remove and prevent the spread of this extremely harmful content.”

Earlier this month, eSafety released a transparency report revealing massive staff cuts at X Corp. around the world.

According to the report, global Trust and Safety staff numbers were reduced by 30 per cent, while Trust and Safety staff in the Asia Pacific region, including Australia, were cut by 45 per cent.

Between November 2022 and May 2023, 6,103 previously banned Twitter accounts were reinstated on X.

At the time of the report’s release, eSafety commissioner Julie Inman Grant said Elon Musk’s staff cuts at X Corp. had created a “perfect storm” for the platform.

eSafety urges those concerned about the non-consensual sharing of images to report to eSafety at www.esafety.gov.au/Report.
