
Iran’s efforts to upend U.S. politics ahead of November’s presidential election by targeting the campaign of former President Donald Trump went well beyond a standard hack-and-leak operation.

According to U.S. intelligence officials, Tehran sought to ensnare the campaign of Trump’s then-opponent, incumbent U.S. President Joe Biden.

Information released late Wednesday by U.S. intelligence officials indicates Iranian cyber actors not only tried to leak stolen Trump campaign documents to media organizations but also tried to feed them to Biden campaign officials, hoping the Biden team might try to use them.

“Iranian malicious cyber actors in late June and early July sent unsolicited emails to individuals then associated with President Biden’s campaign that contained an excerpt taken from stolen, nonpublic material from former President Trump’s campaign as text in the emails,” according to a statement by the FBI, the Office of the Director of National Intelligence and the Cybersecurity and Infrastructure Security Agency.

“There is currently no information indicating those recipients replied,” the statement added, noting the Iranian hackers have continued to peddle the stolen information to U.S. media organizations.

“The FBI has been tracking this activity, has been in contact with the victims, and will continue to investigate and gather information in order to pursue and disrupt the threat actors responsible,” the statement said.

Earlier this month, a U.S. intelligence official warned that Tehran is “making a greater effort than in the past to influence this year's elections.”

Those efforts included what the official described as a “multipronged approach to stoke internal divisions and undermine voter confidence” that has included attacks on Trump, the Republican presidential nominee, as well as Vice President Kamala Harris, who became the Democrats’ presidential nominee after Biden ended his campaign in late July.

Iran's mission to the United Nations has not yet responded to a request from VOA for comment. It has previously denied involvement in any attempts to interfere with U.S. elections.

“This is further proof the Iranians are actively interfering in the election to help Kamala Harris and Joe Biden because they know President Trump will restore his tough sanctions and stand against their reign of terror,” Trump campaign national press secretary Karoline Leavitt told VOA in an email.

The Harris campaign told VOA in an email that it has cooperated with law enforcement since it was made aware of the Iranian activities. “We’re not aware of any material being sent directly to the campaign,” said campaign spokesperson Morgan Finkelstein.

“A few individuals were targeted on their personal emails with what looked like a spam or phishing attempt,” Finkelstein said. “We condemn in the strongest terms any effort by foreign actors to interfere in U.S. elections, including this unwelcome and unacceptable malicious activity.”

The Trump campaign first announced the suspected hack last month, initially blaming "foreign sources hostile to the United States." U.S. intelligence officials attributed the attack to Iran about a week later.

An unclassified U.S. assessment issued earlier this month cautioned, “Iran has a suite of tools at its disposal.”

“Beyond attempts to hack and leak information, Iran is conducting covert social media operations using fake personas and using AI to help publish inauthentic news articles,” it added.

Private technology companies have likewise warned about Iran’s activities.

In a report issued just days before the Trump campaign said it had been hacked by Iran, Microsoft said Tehran-linked actors were already seeding the online space for influence operations and potential cyberattacks.

But Microsoft President Brad Smith on Wednesday indicated Iranian preparations began even earlier.

"We've seen, starting in May, increasingly sophisticated Iranian activity to penetrate network accounts," Smith told a cyber summit in Washington. "It's a classic prelude to hack-and-leak operations. If you can steal the email in June, you can use it in October and you can even change the email."

California Governor Gavin Newsom attends Day 2 of the Democratic National Convention in Chicago, Aug. 20, 2024.

In a step that could have broad implications for future elections in the U.S., California Governor Gavin Newsom this week signed three pieces of legislation restricting the role that artificial intelligence, specifically deepfake audio and video recordings, can play in election campaigns.

One law, which took effect immediately, makes it illegal to distribute "materially deceptive audio or visual media of a candidate" in the 120 days leading up to an election and in the 60 days following an election.

Another law requires that election-related advertisements using AI-manipulated content provide a disclosure alerting viewers or listeners to that fact.

The third law requires that large online platforms take steps to block the posting of "materially deceptive content related to elections in California," and that they remove any such material that has been posted within 72 hours of being notified of its presence.

"Safeguarding the integrity of elections is essential to democracy, and it's critical that we ensure AI is not deployed to undermine the public's trust through disinformation — especially in today's fraught political climate," Newsom said in a statement.

"These measures will help to combat the harmful use of deepfakes in political ads and other content, one of several areas in which the state is being proactive to foster transparent and trustworthy AI."

While California is not the only state with laws regulating the use of deepfakes in political ads, the application of the ban to 60 days following the election is unique and may be copied by other states. Over the years, California has often been a bellwether for future state laws.

Tech titan opposition

Social media platforms and free speech advocates are expected to challenge the laws, asserting that they infringe on the First Amendment's protection of freedom of expression.

One high-profile opponent of the measures is Elon Musk, billionaire owner of the social media platform X, who has been aggressively using his platform to voice his support of Republican presidential nominee Donald Trump.

In July, Musk shared a video that used deepfake technology to impersonate the voice of Vice President Kamala Harris. In the video, the cloned voice describes Harris as a "deep state puppet" and the "ultimate diversity hire."

On Tuesday, after Newsom signed the new laws, Musk once again posted the video, writing, "The governor of California just made this parody video illegal in violation of the Constitution of the United States. Would be a shame if it went viral."

Federal action considered

Most of the legislative efforts to regulate AI in politics have, so far, been happening at the state level. This week, however, a bipartisan group of lawmakers in Congress proposed a measure that would authorize the Federal Election Commission to oversee the use of AI by political campaigns.

Specifically, it would allow the agency to prohibit campaigns from using deepfake technology to make it appear that a rival has said or done something they did not say or do.

During an appearance at an event sponsored by Politico this week, Deputy U.S. Attorney General Lisa Monaco said there was a clear need for rules of the road governing the use of AI in political campaigns, and she expressed her confidence that Congress would act.

While AI promises many benefits, it is also "lowering the barrier to entry for all sorts of malicious actors," she said. "There will be changes in law, I'm confident, over time," she added.

Minimal role in campaign so far

Heading into the 2024 presidential campaign, there was widespread concern that out-of-control use of deepfake technology would swamp voters with huge amounts of misleading content. That hasn't really happened, said PolitiFact editor-in-chief Katie Sanders.

"It has not turned out the way many people feared," she told VOA. "I don't know that it's entirely good news, because there's still plenty of misinformation being shared in political ads. It's just not generated by artificial intelligence. It's really relying on the same tricks of exaggerating where your opponent stands or clipping things out of context."

Sanders said that campaigns might be reluctant to make use of deepfake technology because voters "are distrustful of AI."

"Where the deepfake material that does exist is coming from is smaller accounts, anonymous accounts, and is sometimes catching enough fire to be shared by people who are considered elites on political platforms," she said.
