Google is testing a product that uses artificial intelligence technology to produce news stories, pitching it to news organizations including The New York Times, The Washington Post and The Wall Street Journal’s owner, News Corp, according to three people familiar with the matter.
The tool, known internally by the working title Genesis, can take information — details about current events, for example — and generate news copy, the people said, speaking on condition of anonymity to discuss the product.
One of the three people familiar with the product said Google believed it could serve as a kind of personal assistant for journalists, automating some tasks to free up time for others, and that the company saw it as a responsible technology that could help steer the publishing industry away from the pitfalls of generative AI.
Some executives who saw Google’s proposal described it as unsettling, asking not to be identified discussing a confidential matter. Two people said it seemed to take for granted the effort that went into producing accurate and artful news.
A Google spokesperson did not immediately respond to a request for comment. The Times and The Post declined to comment.
“We have an excellent relationship with Google, and we appreciate Sundar Pichai’s long-term commitment to journalism,” a News Corp spokesman said in a statement, referring to the Google chief executive.
Jeff Jarvis, a journalism professor and media commentator, said that Google’s new tool, as described, has potential advantages and disadvantages.
“If this technology can deliver factual information reliably, journalists should use the tool,” said Mr. Jarvis, director of the Tow-Knight Center for Entrepreneurial Journalism at the Craig Newmark Graduate School of Journalism at the City University of New York.
“If, on the other hand, it is misused by journalists and news organizations on issues that require nuance and cultural understanding,” he continued, “then it could damage the credibility not only of the tool but of the news organizations that use it.”
News organizations around the world are grappling with whether to use artificial intelligence tools in their newsrooms. Many, including The Times, NPR and Insider, have notified staff that they intend to explore possible uses of AI to see how it could be responsibly applied to the high-stakes realm of news, where seconds count and accuracy is paramount.
But Google’s new tool is also sure to induce anxiety among journalists, who have been writing their own articles for centuries. Some news organizations, including The Associated Press, have long used AI to generate stories on subjects such as corporate earnings reports, but those remain a small fraction of the service’s articles compared with those written by journalists.
Artificial intelligence could change that, enabling users to generate articles on a wider scale that, if not edited and carefully controlled, could spread misinformation and affect how traditionally written stories are perceived.
While Google has moved quickly to develop and deploy generative AI, the technology has also presented some challenges to the ad giant. Google has traditionally played the role of curating information and sending users to publisher websites to read more, but tools like its chatbot, Bard, present factual claims that are sometimes incorrect and don’t send traffic to more authoritative sources, such as news publishers.
The technology was introduced as governments around the world called on Google to give news outlets a bigger share of its advertising revenue. After the Australian government tried to force Google to negotiate with publishers on payments in 2021, the company forged more partnerships with news organizations in various countries, under its News Showcase program.
Publishers and other content creators have already criticized Google and other major AI companies for using decades of their articles and posts to help train these AI systems, without compensating the publishers. News organizations including NBC News and The Times have taken a stand against AI siphoning their data without permission.