Explained: How AI-generated propaganda poses a real threat this historic election year

Edited By: Nishtha Badgamia
New Delhi, India Updated: Feb 27, 2024, 04:12 PM(IST)

Experts continue to warn about AI supercharging online disinformation campaigns. (Photograph: Agencies)

Story highlights

A study conducted by researchers at Stanford University and Georgetown University in the US has found that AI-generated propaganda is just as effective as propaganda written by humans. The researchers also warned that propagandists could use such tools to flood online platforms with mass-produced propaganda at minimal effort.

From India to the United States, more than 50 countries across the world, with a combined population of 4.2 billion, are expected to hold national and/or local elections in 2024, making it the biggest election year in history. But as voters head to the polls, who are they voting for, and what are their decisions based on? Policies? Politicians? Or has AI-generated propaganda managed to sway their opinions?

The historic election year is also taking place at a critical time when experts across the globe are calling on world leaders to regulate the rampant growth of AI, warning of its alarming impact on society, including on elections. Here’s how AI-generated propaganda poses a threat to election integrity.

AI vs government officials

Last week, Google’s new AI-powered chatbot Gemini came under fire after it suggested that some experts believe that Indian Prime Minister Narendra Modi’s policies are “fascist”. 

India’s Union Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, slammed Google, accusing Gemini of violating the country’s IT laws.


The Indian PM, who remains optimistic about returning to power for a third consecutive term, is set to contest the general election expected in the coming months. While this particular incident may not affect his chances of being re-elected, it does raise bigger concerns.

A study conducted by researchers at Stanford University and Georgetown University in the US has found that AI-generated propaganda is just as effective as propaganda written by humans. In this study, researchers also noted that people spreading propaganda could easily use AI to flood online platforms with such information and mass-produce propaganda with minimal effort.    

In the US, a robocall that used AI to impersonate President Joe Biden appealed to more than 5,000 New Hampshire voters not to vote in the presidential primary and to save their votes for the general election in November.

The call not only sparked outrage among voters but also triggered investigations by watchdogs and election officials. Within weeks, the incident prompted the US to outlaw robocalls that use AI-generated voices.

In the United Kingdom, Home Secretary James Cleverly warned that criminals and “malign actors” could use AI-generated “deepfakes” to rig the general election.

He also spoke about how people working on behalf of states such as Russia and Iran could generate thousands of deepfakes to manipulate the democratic process. 

“The era of deepfake and AI-generated content to mislead and disrupt is already in play,” he told the Times in an interview. He added, “The landscape it is inserted into needs its rules, transparency and safeguards for its users.”

AI-generated propaganda is persuasive

In 2022, the release of ChatGPT marked a pivotal moment in the development of AI. Despite being around for less than two years, the technology has become powerful enough to impact elections across the world.


The study, published in the peer-reviewed journal PNAS Nexus, used OpenAI’s GPT-3 model – the predecessor and less capable version of the current GPT-4 – to generate propaganda news articles from prompts on topics such as drone strikes, sanctions against Iran, and US involvement in Syria.

The articles covered six main propaganda themes, including the conspiracy theory that the US fabricated reports claiming Syria’s government had used chemical weapons, and the claim that Saudi Arabia had committed to funding the US-Mexico border wall.

These themes likely originated from Iranian or Russian state-aligned covert propaganda campaigns that had been identified by investigative journalists or researchers.

More than 8,200 Americans who participated in the study were shown real propaganda articles as well as AI-generated ones. The findings were striking: 43.5 per cent of participants agreed with the claims after reading the AI-generated articles, compared with 47.4 per cent who believed the real propaganda articles.

“This suggests that propagandists could use GPT-3 to generate persuasive articles with minimal human effort, by using existing articles on unrelated topics to guide GPT-3 about the style and length of new articles,” the researchers noted. 

The authors of the study also believe AI could be used to generate fake social media posts, comments, and even audio and video clips.

“With AI, actors could quickly and cheaply generate many articles that convey a single narrative, while also varying in style and wording. This approach would increase the volume of propaganda, while also making it harder to detect,” the team of researchers said.

‘It’s already happening’

In 2023, Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified before the US Congress that regulation of AI’s “increasingly powerful models” is “critical” to mitigating the risks the technology poses.

Altman also described the use of AI to interfere with election integrity as a “significant area of concern”, and nearly a year later, he may be right. So far, there is little evidence of AI-generated content shaping voters’ opinions, but it is not outside the realm of possibility.

For example, it was not immediately clear whether the fake AI-generated Biden robocall deterred voters from turning out in New Hampshire, but that doesn’t matter, said Lisa Gilbert, executive vice-president of Public Citizen, which advocates for regulating AI’s use in US politics.

“I don’t think we need to wait to see how many people got deceived to understand that that was the point,” Gilbert said, as quoted by the Guardian. 


What happened in the US is just the tip of the iceberg when it comes to AI’s possible impact on elections. In Slovakia, days ahead of the country’s pivotal election, a damning audio clip of one of the top candidates went viral, CNN reported.

In the clip, pro-NATO candidate Michal Šimečka could reportedly be heard boasting about rigging the election. Again, while the number of votes swayed by the leaked audio remains uncertain, the reportedly AI-generated clip sparked concern among US officials ahead of their own upcoming presidential election.

In Indonesia, AI-generated posters and billboards showing a “doe-eyed cartoon version” of Prabowo Subianto were placed across the country, Reuters reported. The retired general, previously best known for his brutal military past, soon became a TikTok sensation and was described as a delight among Gen Z voters.

The AI-generated cartoon has been central to Prabowo’s electoral rebranding. “I’ll vote for him because he’s gemoy (Indonesian slang for cute and cuddly),” Fika Juliana Putri, 19, a first-time voter, told Reuters. “That’s the main reason,” she added.

As experts continue to warn about AI supercharging online disinformation campaigns, some have also said that voters may be more prepared for what’s happening than officials may think. 


“Voters are way more savvy than we give them credit for,” said Katie Harbath, a former Facebook employee who now works on trust and safety issues with tech companies, as quoted by the Guardian.

She added, “They might be overwhelmed but they understand what’s going on in the information environment.”

(With inputs from agencies)
 
