The spread of slick videos that falsely depict U.S. broadcasters making misleading comments or endorsing cryptocurrency and other goods is alarming analysts.
The use of what are known as deepfakes — videos created with artificial intelligence to realistically impersonate people, including journalists — is viewed as especially troubling in a year when more than 40 countries are due to hold significant elections.
At a time of rampant disinformation and low trust in news, and as news outlets grapple with how to integrate AI into their business models, such videos make it even harder for audiences to determine whether what they are seeing and hearing is real.
Deepfakes, especially of journalists, muddy the distinction between what is and isn't real, according to Paul Barrett, deputy director of New York University's Stern Center for Business and Human Rights. Barrett's research focuses on technology's effects on democracy.
With artificial intelligence, individuals can make videos of real anchors reporting fake stories.
As a result, deepfakes "generally blur the line between real and factual on the one hand, and false and misleading on the other, with the ultimate goal of just eroding people's trust in what's going on, politically or otherwise, in their countries and around the world," Barrett told VOA.
It can be hard to trace where the videos originate — and harder still to have the content successfully removed from social media platforms.
Deepfakes have targeted journalists at news outlets including CNN, CBS, the BBC and VOA, impersonating prominent figures such as Anderson Cooper, Clarissa Ward and Gayle King.
Reporters with VOA's Russian Service have been sporadically targeted with deepfakes since October.
One of those reporters, Ksenia Turkova, said she was shocked the first time she saw a video showing her saying things she knew she had never said.
"I felt vulnerable, and I felt that my reputation, my trustworthiness as a journalist can be in danger," Turkova told VOA in October.
In the months since, Turkova said, the deepfakes continued to appear on Facebook, to the point that she has stopped paying attention to them.
"At some point, I just stopped following it," she told VOA last week. "I don't have time to follow it."
For Vincent Berthier, the head of the technology desk at Reporters Without Borders, manipulation of trust is central to why journalists are targeted.
There are typically two goals with deepfakes of reporters, Berthier told VOA. The first is to sow distrust in the media, and the second is to leverage the trust people have in the media to make them believe disinformation.
"Deepfakes are actually a weapon against journalism," he told VOA from Paris. "It's not only bad for press freedom in general. It's also a problem for access to information."
Part of the problem, he said, is the ease with which people can create fake content that is believable.
"There is stuff that innovation should not allow, because it's dangerous for democracy, and if it's dangerous for democracy, it's dangerous for us all," he said.
These concerns were raised last week at a Senate hearing about AI's effects on journalism.
While noting that advanced technology can help local newsrooms produce impactful news coverage, Curtis LeGeyt, president of the National Association of Broadcasters, warned that generative AI can also be abused to spread misinformation and disinformation.
"The use of AI to doctor, manipulate, or misappropriate the likeness of trusted radio or television personalities risks spreading misinformation, or even perpetuating fraud," he told the Senate Judiciary Subcommittee on Privacy, Technology, and the Law.
"All our local personalities have is the trust of their audiences,” LeGeyt said, adding that disinformation and deepfakes can undermine that trust.
The misuse of AI and deepfakes could be particularly harmful around elections, the Stern Center's Barrett said.
It's not a stretch, Barrett said, to imagine someone creating a deepfake of a political opponent to hurt them in the polls, or a deepfake of a journalist reporting a fabricated story about an opponent.
"There's lots of ways that trust and faith in democratic institutions can be eroded by generative AI," Barrett said.
Similar concerns were cited last year by Nobel Peace Prize laureate and journalist Maria Ressa, who warned that the world will know whether democracy "lives or dies" by the end of 2024.
"If we don't have integrity of facts, we cannot have integrity of elections," she said at a Washington event.
When considering the threat posed by deepfakes, Barrett said it's also important to look at the social media platforms where videos are disseminated, rather than just the AI companies.
"The real danger this year, this moment, is not about how problematic content is created. It is still about how problematic content is distributed," he said.
Meta, TikTok and Twitter did not reply to VOA's emails requesting comment for this article.
A Google spokesperson directed VOA to a November blog post from YouTube, which said the video platform will introduce updates "over the coming months" that will "require creators to disclose when they've created altered or synthetic content that is realistic, including using AI tools."
YouTube policy already bans content intended to impersonate people.
Some people are also concerned about how artificial intelligence will be used in newsrooms.
While some media analysts point to how AI can offer a cost-effective boon to investigative journalism, others are more wary.
Additionally, distrust in the media is already at an all-time high in the United States, and a survey of media leaders by the Reuters Institute for the Study of Journalism found that 70% think the rise of generative AI will have a negative impact on trust in news.
Respondents cited the use of AI in content creation as the biggest danger, expressing less concern about its use in areas such as coding and distribution, according to the January report.