Google is investing heavily in artificial intelligence, aiming to create a system capable of writing news: an AI designed for journalism. As reported by the New York Times, the new tool, tentatively called “Genesis”, is under development and could represent a real revolution in the field of journalism. However, in a world where fake news carries growing weight, it remains to be seen whether that revolution will be positive or negative.
Google Genesis, AI designed for journalism
Generative AI has already made its debut in the journalism industry, albeit tentatively. At CES 2023 we witnessed the first report written with AI, CNET admitted to using bots to write some “simple” news, such as financial reports, and BuzzFeed used ChatGPT to write travel guides. There are also several news sites online generated entirely by artificial intelligence, and after reading even a couple of lines the difference is noticeable: repeated sentences and a lack of focus.
Google’s AI, however, aims to produce more accurate reports when given enough information about a story. It is not yet clear how this technology will differ from the existing artificial intelligence systems used to generate articles, or from Google’s AI chatbot, known as Google Bard and recently arrived in Italy. What is certain is that Google has already presented Genesis to major news organizations such as the Washington Post, News Corp and the New York Times itself.
Between potential and risks
According to anonymous sources cited by the Times, Genesis will be able to generate news stories from the event details it is given. Its main role will be to act as a personal assistant for journalists, helping them write articles and gather information.
This could simplify the work of reporters, allowing them to spend more time searching for news and asking the right questions (perhaps leaving to the AI the task of finding the right captions for social media). Yet there remains the concrete possibility that these tools will let newsrooms cut the number of journalists, keeping only a few staff members who check the accuracy of AI-generated content: improving publishing efficiency rather than content quality.
Furthermore, there are concerns about using artificial intelligence for news reporting. These systems have in the past generated misinformation with conviction and authority. This is less common among human journalists, who can critically evaluate and verify the news. Or rather: a journalist who works with seriousness and ethics is responsible for what they write and will avoid knowingly spreading false news, while an AI may simply not notice that it has done so.
AI mistakes
Recently, a Georgia radio host sued OpenAI, the maker of ChatGPT, for defamation after the chatbot produced an erroneous summary of a federal court case, incorrectly alleging that he had defrauded his employer and embezzled funds. In another case, a lawyer who relied on ChatGPT to prepare legal documents ended up citing non-existent cases, suggesting that the artificial intelligence had invented them entirely.
If similar errors were to appear in a major newspaper such as the Times, the risk of defamation could be enormous. The same goes for false or biased news, which could mislead public opinion and undermine the newspaper’s reputation.
The responsibility of Google and of publishers
Google will certainly try to avoid similar problems with Genesis, as factual errors would be particularly serious in an AI system designed for journalism.
However sophisticated Google’s technology may be, an artificial intelligence in the field of journalism cannot replace interviewing sources, direct experience of events or investigative reporting. Still, as Amanda Yeo points out on Mashable, considering that many journalists find themselves chained to computers producing numerous stories each day, some might argue that many of them don’t get the opportunity to gain first-hand experience anyway.
AI could have a positive impact on the quality of reporting if it becomes a tool that relieves reporters of routine work so they can find important and exciting stories. But that will depend on what publishers can and choose to do with AI, as well as on the quality of Google’s software.
Transparency about which content is generated by AI thus becomes essential, as does readers’ willingness to check multiple sources.
Source
The New York Times