Google’s work on an artificial intelligence tool that would produce news articles is worrying some digital experts, who say such tools risk inadvertently spreading propaganda or endangering the safety of sources.
The New York Times reported last week that Google is testing a new product, known internally by the working title Genesis, that employs artificial intelligence, or AI, to produce news articles.
Genesis can take in information, like details about current events, and create news content, The New York Times reported. Google has already pitched the product to organizations including The Washington Post, The New York Times, and News Corp, which owns The Wall Street Journal.
The launch of the generative AI chatbot ChatGPT last fall has sparked a flurry of debate about how artificial intelligence can and should fit into the world — including in the news industry.
AI tools can help reporters research by quickly analyzing datasets and extracting data from PDF files, in a process known as scraping. AI can also help journalists fact-check sources.
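The kind of extraction described above can be sketched in a few lines. The snippet below is an illustrative example only: the sample text, the function name, and the dollar-amount pattern are invented for this sketch, and a real newsroom pipeline would first use a PDF library to pull the raw text out of the document.

```python
import re

def scrape_dollar_amounts(text: str) -> list[str]:
    """Pull dollar figures (e.g. "$6.1 million") out of raw report text.

    A toy stand-in for the kind of extraction a reporter might run over
    text already pulled from a PDF.
    """
    pattern = r"\$\d[\d,]*(?:\.\d+)?(?:\s*(?:million|billion))?"
    return re.findall(pattern, text)

# Invented sample standing in for text extracted from a PDF report.
sample = (
    "The agency's budget rose from $4,500,000 to $6.1 million, "
    "while grants totaled $300,000."
)

print(scrape_dollar_amounts(sample))
# → ['$4,500,000', '$6.1 million', '$300,000']
```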
But the apprehensions appear weightier: that AI could spread propaganda, or miss the nuance human reporters bring to their work. These worries extend beyond Google’s Genesis tool to encompass the use of AI in newsgathering more broadly.
If AI-produced articles are not carefully checked, they could unwittingly include disinformation or misinformation, according to John Scott-Railton, who researches disinformation at the Citizen Lab in Toronto.
“It’s sort of a shame that the places that are the most friction-free for AI to scrape and draw from — non-paywalled content — are the places where disinformation and propaganda get targeted,” Scott-Railton told VOA. “Getting people out of the loop does not make spotting disinformation easier.”
Paul M. Barrett, deputy director at New York University's Stern Center for Business and Human Rights, agrees that artificial intelligence can turbocharge the dissemination of lies and falsehoods.
“It’s going to be easier to generate myths and disinformation,” he told VOA. “The supply of misleading content is, I think, going to go up.”
In an emailed statement to VOA, a Google spokesperson said, “in partnership with news publishers, especially smaller publishers, we’re in the earliest stages of exploring ideas to potentially provide AI-enabled tools to help their journalists with their work.”
“Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity,” the spokesperson said. “Quite simply these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating, and fact-checking their articles.”
The implications for a news outlet’s credibility are another important consideration regarding the use of artificial intelligence.
News outlets are presently struggling with a credibility crisis. Half of Americans believe that national news outlets try to mislead or misinform audiences through their reporting, according to a February report from Gallup and the Knight Foundation.
“I’m puzzled that anyone thinks that the solution to this problem is to introduce a much less credible tool, with a much shakier command of facts, into newsrooms,” said Scott-Railton, who was previously a Google Ideas fellow.
Reports show that AI chatbots regularly produce responses that are entirely wrong or fabricated, a tendency AI researchers call “hallucination.”
Still, other worries remain open questions.
Digital experts are wary, for instance, of the security risks that using AI tools to produce news articles could pose to anonymous sources, who may face retaliation if their identities are revealed.
“All users of AI-powered systems need to be very conscious of what information they are providing to the system,” Barrett said.
“The journalist would have to be cautious and wary of disclosing to these AI systems information such as the identity of a confidential source, or, I would say, even information that the journalist wants to make sure doesn't become public,” he continued.
Scott-Railton said he thinks AI probably has a future in most industries, but it’s important not to rush the process, especially in news.
“What scares me is that the lessons learned in this case will come at the cost of well-earned reputations, will come at the cost of factual accuracy when it actually counts,” he said.