Americans face flood of AI & deepfake propaganda, says ex-White House intel boss

China and Russia are ‘already working on hijacking the presidential elections using AI tools’

THE chilling rise of deepfakes and AI will change the face of the 2024 US presidential elections, the White House’s former intelligence chief has warned.

Theresa Payton said American voters will face a flood of propaganda designed to sway campaigns and blur the lines between reality and fiction.

Doctoring photos, video and audio through artificial intelligence has never been easier – and the results can be extremely deceiving.

AI is now being leveraged to influence politics, fuelled by the rise of cheap, intuitive, and effective generative AI tools such as ChatGPT and MidJourney.

“Deepfakes are absolutely going to be front and centre of the US presidential elections,” Theresa told The Sun.

“Propagandists could use AI to create fake documents, audio, and videos about candidates, influencing public opinion and potentially causing division within society.

“Notable people who have influence could be shown saying or doing something that they never did, spreading disinformation and misinformation.”

And the threat is not just from within.

In a world connected by social media, external actors such as Russia, China, and North Korea will try to disrupt the democratic process, Theresa warned.

“I don’t doubt for a moment China and Russia are already working on hijacking the presidential elections using AI tools,” she said.

“They will play a predominant role, leveraging everything at their disposal to disrupt the election process.

“They have an interesting bench of operatives who are trained in doing propaganda on their own citizens.

“And when it’s time for our elections, they will start training on American politics to disrupt the process.”

Before the popular rise of AI, doctoring videos and images with great accuracy required expensive software and high levels of technological knowledge.

But the tools have become increasingly accessible to the public – and almost anyone can create a fake image, video or audio clip.

“You don’t have to be a programmer anymore to leverage AI algorithms,” Theresa said.

“You just have to be creative in your prompt engineering. Once you think about what you want, you can just get it out using the prompt engineering.

“The user-friendliness of generative AI empowers anyone without a technical background to go on and create deepfakes.”

All it takes is a $60 subscription to AI tools like MidJourney or Dall-E to generate synthetic media.

And that is why deepfakes and altered media have flooded the internet – particularly over the past year – attacking top politicians to build certain narratives.

According to DeepMedia, a company working on tools to detect altered media, the creation and circulation of video deepfakes of all kinds increased threefold in 2023 compared with the previous year.

The same is the case with voice deepfakes, which increased a whopping eightfold in the same period.

In May, former president Donald Trump himself shared a deepfake of his interview with CNN.

The doctored clip showed CNN anchor Anderson Cooper saying: “That was President Donald J. Trump ripping us a new a**hole here on CNN’s live presidential townhall.”

While the anchor’s lip movements do not match the words being said, the video is still convincing to viewers who are unfamiliar with deepfakes.

Two months ago, US President Joe Biden was attacked with a doctored video falsely claiming he is a paedophile – an incident that forced an urgent investigation into Meta’s policies on manipulated content.

While the seven-second clip was not AI-generated, it was still leveraged to manipulate people’s opinions.

Meta refused to take the video down, saying it did not violate its policies against manipulated media because those rules only apply to “AI-generated videos that alter someone’s speech”.

But thousands of other misleading posts – including AI-generated deepfakes – are being pushed to social media, making it difficult for fact-checkers to separate fact from fiction.

“It is quite difficult for fact-checkers to verify false information that is generated by AI,” Theresa said.

“The unprecedented rate of creation of such deepfakes makes the job even more challenging. Social media is being bombarded with such synthetic media.

“What makes it even harder is the fact that one can actually take AI models and train them on massive data sets of text and images and generate content based on real content that exists on the internet.

“They actually start with the kernel of the truth, and then they wrap misinformation and disinformation around it, so it’s hard to discern what is real and what is fiction.”

All this works to the advantage of propagandists looking to disrupt opposing campaigns in the 2024 elections.

“AI-generated content is extremely realistic and convincing,” Theresa said.

“And now with the algorithms, it can be tailored to very specific audiences, making it that much harder to detect.

“It is being created at an unprecedented rate and the outcomes could potentially be incredibly disruptive.”

According to Theresa, propagandists will start early next year, using generative AI to discourage people from voting or to push them to vote a certain way.

“They could generate a closed-door meeting of the Biden administration planning to do something illegal,” she said.

“Likewise, one could make up fake documents, fake audio, fake video about the candidates on the Republican side of the aisle.

“There is so much footage of people in the administration – raw content to create deepfakes that would be incredibly hard to discern whether they are real or not.

“If you can dream it, the current technology can make it possible.”

And the issue is not only with propaganda artists using deepfakes.

People are more likely to believe and spread misinformation if it aligns with their perceptions and ideals.

“A lot of where manipulative propaganda propagates is actually in these private messaging groups with like-minded people,” Theresa said.

“You just have to get one manipulator. Get one fish on the hook, and then they go and round everybody up.”

It appears lawmakers in the US are not prepared to fight what could potentially wreak havoc on their biggest democratic process.

The government has yet to take steps to tackle deepfakes, partly because any regulation must navigate First Amendment protections and then a raft of legal challenges.

And since generative AI and deepfakes are relatively new – and are growing exponentially – most lawmakers are still struggling to respond to them.
