Taylor Swift AI deepfakes: Can the pop star take legal action?

As the technology improves by the day, the problems posed by artificial intelligence (AI) are only getting worse.

Robocalls of US President Joe Biden, a new George Carlin standup comedy special, videos of dead children and teenagers detailing their own deaths, and more have swept the internet recently; none of them were real.

Recently, extremely offensive and sexually explicit images of Taylor Swift have been circulating online, making her the most famous victim of a problem tech firms are struggling to contain.

Deepfake pornographic images are spreading quickly online, prompting calls from US lawmakers to ban the practice, which uses AI to generate fake imagery.

Sexually explicit and abusive fake images of Taylor Swift began circulating widely this week on the social media platform X.

The deepfake-detecting group Reality Defender said it tracked a deluge of nonconsensual pornographic material depicting Swift, particularly on X.

Some images also made their way to Meta-owned Facebook and other platforms such as Telegram, the Associated Press reported. One image of the singer was viewed 47 million times on X before it was removed on Tuesday; according to AFP, US media reported that the post was live on the platform for around 17 hours.

X said in a statement, “Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them. Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content.”

The Elon Musk-owned platform added that it was “closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed.”

Meanwhile, Meta said in a statement that it strongly condemns “the content that has appeared across different internet services” and has worked to remove it. “We continue to monitor our platforms for this violating content and will take appropriate action as needed,” the company said.

According to AP, her ardent fanbase of “Swifties” quickly mobilised, launching a counteroffensive on the platform formerly known as Twitter under the hashtag #ProtectTaylorSwift to flood it with more positive images of the pop star. Some said they were reporting accounts that were sharing the deepfakes.

Notably, the incident occurred on the same day the estate of George Carlin filed a lawsuit in federal court in Los Angeles against the media company behind a fake hour-long comedy special titled George Carlin: I’m Glad I’m Dead, in which the comedian, who died in 2008, appears to deliver commentary on current events, according to USA Today.

Swift mulls legal action

According to The New York Post, Swift is reportedly “furious” that AI-generated images of her are making the rounds online and is considering legal action against the site behind them.

A source close to the 34-year-old musician said, “Whether or not legal action will be taken is being decided but there is one thing that is clear: these fake AI-generated images are abusive, offensive, exploitative and done without Taylor’s consent and/or knowledge.”

“The door needs to be shut on this. Legislation needs to be passed to prevent this and laws must be enacted,” the source said.

Brittany Spanos, a senior writer at Rolling Stone who teaches a course on Swift at New York University, says Swift’s fans are quick to mobilise in support of their artist, especially those who take their fandom very seriously, and particularly in situations of wrongdoing.

“This could be a huge deal if she really does pursue it to court,” AP quoted her as saying.

Spanos says the deepfake pornography issue aligns with others Swift has faced in the past, pointing to her 2017 lawsuit against a radio station DJ who allegedly groped her. Jurors awarded Swift $1 in damages, a sum her attorney, Douglas Baldridge, called “a single symbolic dollar, the value of which is immeasurable to all women in this situation” in the midst of the MeToo movement. (Symbolic $1 claims became something of a trend thereafter, as in Gwyneth Paltrow’s 2023 countersuit against a skier.)

Renewed calls for US legislation

Federal lawmakers who have introduced bills to restrict or criminalise deepfake porn said the incident shows why the US needs better protections.

“For years, women have been victims of non-consensual deepfakes, so what happened to Taylor Swift is more common than most people realize,” said Yvette D. Clarke, a Democratic Congresswoman from New York who has introduced legislation that would require creators to digitally watermark deepfake content (an idea sketched below).

“Generative AI is helping create better deepfakes at a fraction of the cost,” Clarke said.
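
Watermarking in this sense means attaching a machine-readable mark to AI-generated media so that platforms and viewers can identify it later. Purely as a toy illustration, and not the mechanism of any proposed bill, the Python sketch below hides a short tag in the least-significant bits of an image’s pixels; the helper names are hypothetical, and real provenance schemes are far more robust against cropping, compression and re-encoding.

```python
# Illustrative only: a naive least-significant-bit (LSB) watermark.
# Helper names (embed_tag, read_tag) are hypothetical, not any bill's scheme.
from PIL import Image

def embed_tag(path_in: str, path_out: str, tag: str) -> None:
    """Hide a short tag in the lowest bit of each RGB channel."""
    img = Image.open(path_in).convert("RGB")
    bits = "".join(f"{b:08b}" for b in tag.encode()) + "00000000"  # NUL-terminated
    flat = [c for px in img.getdata() for c in px]
    if len(bits) > len(flat):
        raise ValueError("image too small to hold the tag")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | int(bit)  # overwrite the lowest bit
    img.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    img.save(path_out, "PNG")  # lossless format preserves the hidden bits

def read_tag(path: str) -> str:
    """Read the low bits back until the NUL terminator."""
    flat = [c for px in Image.open(path).convert("RGB").getdata() for c in px]
    out = bytearray()
    for i in range(0, len(flat) - 7, 8):
        byte = int("".join(str(flat[i + j] & 1) for j in range(8)), 2)
        if byte == 0:
            break
        out.append(byte)
    return out.decode()
```

Calling `embed_tag("photo.png", "tagged.png", "ai-generated")` and then `read_tag("tagged.png")` round-trips the tag, but even mild JPEG re-compression would destroy it, which is why serious proposals favour sturdier techniques.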

US Rep. Joe Morelle, another New York Democrat, is pushing a bill that would criminalise sharing deepfake porn online. He said what happened to Swift was disturbing and increasingly pervasive across the internet.

“The images may be fake, but their impacts are very real,” Morelle said in a statement. “Deepfakes are happening every day to women everywhere in our increasingly digital world, and it’s time to put a stop to them.”

While several US states have laws against deepfakes, there are rising calls to amend federal law.

Morelle’s proposed Preventing Deepfakes of Intimate Images Act, introduced in May 2023, would ban the sharing of deepfake pornography without consent.

The Guardian quoted Morelle as saying the images and videos “can cause irrevocable emotional, financial, and reputational harm – and unfortunately, women are disproportionately impacted.”

He denounced the Swift photos in a tweet, calling them “sexual exploitation.” His proposed legislation is still pending.

The report also quoted Republican congressman Tom Kean Jr. as saying, “It is clear that AI technology is advancing faster than the necessary guardrails. Whether the victim is Taylor Swift or any young person across our country, we need to establish safeguards to combat this alarming trend.”

In addition to co-sponsoring Morelle’s bill, Kean has introduced his own AI Labelling Act, which would mandate the labelling of all AI-generated content, including innocuous customer-service chatbots.

AI as a bigger threat

Realistic deepfake audio and video have been used to imitate prominent men, especially politicians like Joe Biden and Donald Trump, as well as musicians like Drake and The Weeknd.

However, the technology predominantly targets women, often in a sexually exploitative manner. According to a 2019 DeepTrace Labs report, cited in proposed US legislation, 96 per cent of deepfake video content was non-consensual pornographic material.

Fake pornography has long been a problem, but it has grown far worse since 2019 as generative tools have become cheaper and more capable.

Speaking out in 2018 against the widespread use of her likeness in fake pornography, Scarlett Johansson said, “I have sadly been down this road many, many times. The fact is that trying to protect yourself from the internet and its depravity is basically a lost cause, for the most part.”

Mason Allen, Reality Defender’s head of growth, said the researchers are 90 per cent confident that the images were created by diffusion models, a type of generative artificial intelligence model that can produce new, photorealistic images from written prompts.

The most widely known are Stable Diffusion, Midjourney and OpenAI’s DALL-E. Allen’s group did not try to determine the images’ provenance.
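
For readers unfamiliar with the mechanics: diffusion models start from random noise and repeatedly denoise it, steered by a text prompt, until an image emerges. A minimal sketch using the open-source Hugging Face diffusers library follows; the checkpoint name and prompt are illustrative and unrelated to the images in this story.

```python
# A minimal sketch of text-to-image generation with the open-source
# Hugging Face diffusers library; checkpoint and prompt are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly released checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # diffusion inference is GPU-heavy in practice

# The pipeline starts from random latent noise and iteratively denoises it,
# steered by the text prompt, until a photorealistic image emerges.
image = pipe("a lighthouse on a rocky coast at dusk").images[0]
image.save("generated.png")
```

Hosted services layer prompt filters on top of the raw model, and pipelines like the one above ship with a built-in safety checker intended to block sexually explicit output; the events described here show how imperfect those guardrails remain.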

In the first nine months of 2023, 113,000 deepfake videos were uploaded to the most prominent porn websites, according to research cited by Wired magazine.

Tech giants’ response

OpenAI said it has safeguards in place to limit the generation of harmful content and to “decline requests that ask for a public figure by name, including Taylor Swift.”

Microsoft, which offers an image-generator based partly on DALL-E, said Friday it was in the process of investigating whether its tool was misused. Much like other commercial AI services, it said it doesn’t allow “adult or non-consensual intimate content, and any repeated attempts to produce content that goes against our policies may result in loss of access to the service.”

Asked about the Swift deepfakes on NBC Nightly News, Microsoft CEO Satya Nadella told host Lester Holt in an interview airing Tuesday that there’s a lot still to be done in setting AI safeguards and “it behoves us to move fast on this.”

“Absolutely this is alarming and terrible, and so, therefore, yes, we have to act,” Nadella said.
