A photo of Donald Trump being arrested, a video showing a bleak future if President Joe Biden is re-elected, or an audio recording of an argument between the two. These social media posts have one thing in common: they are completely fake.
All of these were created with artificial intelligence (AI), a fast-growing technology. Experts fear it could fuel a flood of misinformation during the 2024 presidential election, likely the first in which the technology's use will be widespread.
Democrats and Republicans alike will be tempted to use AI, which is cheap, accessible and subject to little legal oversight, to better woo voters or churn out campaign leaflets at the snap of a finger.
But experts fear the tool could also be used to wreak havoc in a divided country, with some voters believing the 2020 election was stolen from former President Donald Trump, despite evidence to the contrary.
In March, fake AI-generated images showing him being arrested by police officers offered a glimpse of what the 2024 campaign could look like.
Last month, in response to Joe Biden’s re-election announcement, the Republican Party released a video, also made with AI, predicting a dire future if he were re-elected.
Realistic images, albeit fake, showed China invading Taiwan or a collapse in financial markets.
Earlier this year, an audio recording of Donald Trump and Joe Biden insulting each other went viral on TikTok. It too was fake, and again made using AI.
For Joe Rospars, founder of the digital agency Blue State, ill-intentioned actors are using this technology to create “new tools to incite hatred” and “to fool the press and the public”.
Fighting them will require “awareness from the media, technology companies and voters,” he told AFP.
– “Lie” –
Regardless of the intentions of the person using it, the effectiveness of AI is undeniable.
When AFP asked ChatGPT to create a political newsletter in favor of Donald Trump, feeding it false claims he has spread, the interface produced a polished text, filled with lies, within seconds.
And when the chatbot was asked to make the speech “more aggressive,” it repeated those false claims in an even more apocalyptic tone.
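The prompt-and-refine exchange described above can also be run programmatically rather than through the chat interface. The sketch below is purely illustrative: it assumes OpenAI's Python client and an API key, the model name and prompts are placeholders, and it is not a reconstruction of AFP's actual test.

```python
# Illustrative sketch only: AFP used the ChatGPT interface; this shows the same
# kind of draft-then-adjust-the-tone exchange via OpenAI's Python client.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Step 1: ask for a first draft (placeholder prompt for a fictional candidate).
messages = [
    {"role": "user",
     "content": "Draft a short campaign newsletter for a fictional candidate."},
]
draft = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
draft_text = draft.choices[0].message.content
print(draft_text)

# Step 2: feed the draft back and ask for a more aggressive tone,
# mirroring the follow-up request described in the article.
messages.append({"role": "assistant", "content": draft_text})
messages.append({"role": "user", "content": "Make the tone more aggressive."})
revised = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(revised.choices[0].message.content)
```

The point of the example is the speed and ease of the loop: each request returns a complete, fluent text in seconds, and the tone can be dialed up with a one-line follow-up.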
“Right now, AI is lying a lot,” said Dan Woods, a former official on Joe Biden’s 2020 campaign.
“If our foreign adversaries need only rely on a chatbot that is already prone to making things up to spread disinformation, we should prepare for a much more aggressive disinformation campaign than in 2016,” he said.
At the same time, Vance Reavie, the head of Junction AI, promises that the technology will help campaigns better understand voters, especially those who abstain or rarely vote.
Artificial intelligence “allows us to understand precisely what they’re interested in and why, and we can decide how to engage them and what policies will interest them,” he says.
It also saves campaign teams time when drafting speeches, tweets or questionnaires for voters.
– “Denying the truth” –
But Vance Reavie notes that “a lot of content that’s created is fake.”
Many Americans’ distrust of the mainstream media doesn’t help matters.
“The fear is that if it becomes easier to manipulate the media, it will become easier to deny reality,” said Hany Farid, a professor at the University of California, Berkeley.
“For example, if a candidate says something inappropriate or illegal, they can say the record is false. It’s very dangerous.”
Despite these fears, the technology is already being put to use. Higher Ground Labs’ Betsy Hoover told AFP her company is developing a program that uses AI to write fundraising emails and evaluate their effectiveness.
This former official on Barack Obama’s 2012 campaign said: “Those with bad intentions will use all the tools at their disposal to achieve their goals, and AI is no exception.
“But I don’t believe that fear should stop us from using AI.”