With the growth of advanced artificial intelligence and its ability to help spread mis- and disinformation, some media experts believe the journalism industry should adopt uniform standards on the new technology.
Among the questions discussed by newsroom leaders is how AI — which has a record of errors and so-called digital hallucinations, and is already used to create deepfakes — can be ethically relied on by an industry whose credibility depends on trust.
“I think it will take some time for news organizations to develop best practices,” said Jared Schroeder, an associate professor specializing in media law and technology at the University of Missouri School of Journalism.
“There is no set best practices yet and we have two problems: It’s new and it’s changing. We are not done. The AI of today will be different next year and in five years,” he added.
Generative AI poses an interesting dilemma for an industry suffering economic decline. The technology can assist in producing transcripts, editing copy, narrating audio or TV packages and creating images. And investigative news outlets have long relied on AI tools to trawl large data sets.
But it also carries risks of copyright violations and plagiarism, along with errors.
The New York Times last month sued OpenAI and Microsoft for copyright infringement. And an August report by the watchdog NewsGuard found AI chatbots plagiarizing thousands of news articles.
As news organizations start adopting standards and practices, many experts agree that AI is a useful tool for journalism, but that people are still needed to oversee its application.
“As a journalist I am not allowed to use AI to draft my stories or anything like that,” said Ryan Heath, the global technology correspondent for the news website Axios. “It is fine to use it to do a bit of research and prompt you for inspiration, but you cannot use it to do the actual reporting or drafting of your articles,” he told VOA.
News outlets that have experimented with using AI in place of reporters have had only limited success.
The U.S. media outlet Sports Illustrated in November was accused of publishing AI-generated content under fake bylines. Sports Illustrated denied the allegations, saying a third party provided the content. It fired senior executives the following month, but denied its decision was connected to the AI allegations.
The outlet was back in the news Friday, announcing mass staff layoffs after its licensing agreement was revoked over missed payments.
Separately, the tech website CNET’s experiment early last year using AI to help produce stories resulted in dozens of articles containing errors. CNET published at least 41 corrections, according to The Verge. The site's then editor-in-chief said in a statement at the time, "We've paused and will restart using the AI tool when we feel confident the tool and our editorial processes will prevent both human and AI errors."
Heath of Axios said his media outlet had adopted a more cautious approach.
“They recognize that it is definitely a big transformation, so they hired people like me to write full time about AI,” he said. “[But] they want to stop and think about it first.”
Axios is not alone in hiring staff to focus on AI. The New York Times last month named Zach Seward as its editorial director of AI initiatives.
In a press release, the publisher said it would be Seward’s job to establish principles for using artificial intelligence at the organization. VOA reached out to the Times to request an interview, but the paper declined.
Others, like The Associated Press, are signing licensing agreements with OpenAI, ChatGPT’s maker.
AP also declined VOA’s request for interviews. But in press releases the news agency has said: “Accuracy, fairness and speed are the guiding values for AP’s news report, and we believe the mindful use of artificial intelligence can serve these values and over time improve how we work.”
The implications of AI for newsrooms are a key trend for 2024, especially in a year when more than 40 countries will hold significant elections.
“Embracing the best of AI while managing its risks will be the underlying narrative of the year ahead,” wrote Nic Newman, senior research associate at the Reuters Institute for the Study of Journalism, in the organization’s annual media trends report.
Noting that questions on trust and intellectual property are key, Newman added, “Publishers can also see advantages in making their businesses more efficient and more relevant for audiences.”
Newsroom leaders and media watchdogs are weighing in, too.
Nobel laureate Maria Ressa joined with Reporters Without Borders and other groups in November to release the Paris Charter on AI and Journalism. The charter’s creators say they want it to serve as an ethics blueprint for AI use in journalism and want news organizations to adopt its 10 principles.
But so far, the charter hasn’t been adopted by many news organizations. And the list of journalists using AI is growing.
Pandora’s box has been opened, said Schroeder, adding, “It would be dangerous for journalism to not be thinking about how AI should be used. It doesn’t mean every news organization should use it the same way.”
Some governments seem to share that view.
A U.S. Senate Judiciary subcommittee hearing on January 10 examined concerns about how the technology could affect journalism.
“It is in fact a perfect storm of declining revenue and exploding disinformation, and a lot of the cause of this perfect storm is in fact technologies like artificial intelligence,” said Senator Richard Blumenthal, a Democrat from Connecticut, who chaired the hearing.
And in December the European Union passed the Artificial Intelligence Act to ensure safe and transparent use of the technology. The law includes requirements for tech companies to disclose when content is generated by AI.
Media experts are also emphasizing transparency and human oversight in their discussions of how and when AI is used in journalism.
Editor's note: The 13th paragraph of this article has been updated to clarify CNET's use of AI.