
FILE - The letters AI (for artificial intelligence) and a robot's hand are seen against the backdrop of an illuminated computer motherboard in this illustration photo taken June 23, 2023.

Russia, Iran and China are not giving up on the use of artificial intelligence to sway American voters ahead of November’s presidential election, even though U.S. intelligence agencies assess that AI has so far failed to revolutionize their election influence efforts.

The new appraisal, released late Monday by the Office of the Director of National Intelligence, comes a little more than 40 days before U.S. voters head to the polls. It follows what officials describe as a “steady state” of influence operations by Moscow, Tehran and Beijing aimed at affecting the race between former Republican President Donald Trump and Democratic Vice President Kamala Harris, as well as other statewide and local elections.

"Foreign actors are using AI to more quickly and convincingly tailor synthetic content,” said a U.S. intelligence official, who briefed reporters on the condition of anonymity to discuss the latest findings.

“AI is an enabler,” the official added. “A malign influence accelerant, not yet a revolutionary influence tool.”

It is not the first time U.S. officials have expressed caution about how AI could impact the November election.

A top official at the Cybersecurity and Infrastructure Security Agency (CISA), the U.S. agency charged with overseeing election security, told VOA earlier this month that, so far, the malicious use of AI has not lived up to some of the hype.

“Generative AI is not going to fundamentally introduce new threats to this election cycle,” said CISA senior adviser Cait Conley. “What we’re seeing is consistent with what we expected to see.”

That does not mean, however, that U.S. adversaries are not trying.

The new U.S. intelligence assessment indicates Russia, Iran and China have used AI to generate text, images, audio and video and distribute them across all major social media platforms.

Iran and Russia have yet to respond to requests for comment. Both have previously rejected U.S. allegations regarding election influence campaigns.

The Chinese Embassy in Washington dismissed the U.S. intelligence assessment as “full of prejudice and malicious speculation.”

“China actively advocates the principle of ‘putting people first’ and ‘smart for the good’ to ensure that AI is safe, reliable and controllable,” embassy spokesperson Liu Pengyu told VOA in an email. “We hope that the US adjusts its mindset, assumes the responsibilities of a major country, and stops fabricating and spreading false information.”

While U.S. intelligence officials would not say how many U.S. voters have been exposed to such malign AI products, there is reason to think that some of the efforts are, at least for the moment, falling short.

“The quality is not as believable as you might expect,” said the U.S. intelligence official.

One reason, the official said, is that Russia, Iran and China have struggled to overcome restrictions built into some of the more advanced AI tools while also encountering difficulties developing their own AI models.

There are also indications that all three U.S. adversaries have so far failed to find ways to use AI more effectively to identify and target receptive audiences.

“To do scaled AI operations is not cheap,” according to Clint Watts, a former FBI special agent and counterterror consultant who heads up the Microsoft Threat Analysis Center (MTAC).

“Some of the infrastructure and the resources of it [AI], the models, the data it needs to be trained [on] – very challenging at the moment,” Watts told a cybersecurity summit in Washington earlier this month. “You can make more of everything – misinformation, disinformation – but it doesn't mean they'll be very good.”

In some cases, U.S. adversaries see traditional tactics, which do not rely on AI, as equally effective.

For instance, U.S. intelligence officials on Monday said a video claiming that Vice President Harris injured a girl in a 2011 hit-and-run accident was staged by Russian influence actors, confirming an assessment last week by Microsoft.

The officials also said altered videos showing Harris speaking slowly, also the work of Russian influence actors, could have been made without relying on AI.

For now, experts and intelligence officials agree that when it comes to AI, Russia, Iran and China have settled on quantity over quality.

Microsoft has tracked hundreds of instances of AI use by Russia, Iran and China over the past 14 months. And while U.S. intelligence officials would not say how much AI-generated material has been disseminated, they agree Russian-linked actors, especially, have been leading the way.

“These items include AI-generated content of and about prominent U.S. figures … consistent with Russia's broader efforts to boost the former president's candidacy and denigrate the vice president and the Democratic Party,” the U.S. intelligence official said, calling Russia one of the most sophisticated actors in knowing how to target American voters.

Those efforts included an AI-boosted campaign to spread disinformation through a series of fake web domains masquerading as legitimate U.S. news sites, which was disrupted earlier this month by the U.S. Department of Justice.

Iran, which has sought to hurt former President Trump’s re-election bid, has also copied the Russian playbook, according to the new U.S. assessment, seeking to sow discord among U.S. voters.

Tehran has also been experimenting, using AI to help spread its influence campaign not just in English, but also in Spanish, especially when seeking to generate anger among voters over immigration.

“One of the benefits of generative AI models is to overcome various language barriers,” the U.S. intelligence official said.

“So Iran can use the tools to help do that,” the official added, calling immigration “obviously an issue where Iran perceives they could stoke discord.”

Beijing, in some ways, has opted for a more sophisticated use of AI, according to the U.S. assessment, using it to generate fake news anchors in addition to fake social media accounts.

But independent analysts have questioned the reach of China’s efforts under its ongoing operation known as “Spamouflage.”

A recent report by the social media analytics firm Graphika found that, with few exceptions, the Chinese accounts “failed to garner significant traction in authentic online communities discussing the election.”

U.S. intelligence officials have also said the majority of the Chinese efforts have been aimed not at Trump or Harris, but at state and local candidates perceived as hostile to Beijing.

U.S. intelligence officials on Monday refused to say how many other countries are using AI in an effort to influence the outcome of the U.S. presidential election.

Earlier this month, U.S. Deputy Attorney General Lisa Monaco said Washington was “seeing more actors in this space acting more aggressively in a more polarized environment and doing more with technologies, in particular AI.”

