
NewsGuard identifies 37 sites using AI to rewrite content from major news organizations

In recent days (we discussed it in a previous article), the New York Times set a precedent of great symbolic importance.

It blocked OpenAI's artificial intelligence from training on, or "feeding" on, the publication's content.

It is an action that clearly aims to safeguard intellectual property rights. It is also a tangled debate destined to last a long time wherever generative artificial intelligence is at stake. For example, a very recent ruling by a US court held that "creative" content produced by AI cannot enjoy copyright protection.

Another issue, not far removed, is dividing the stars of music: the seemingly imminent agreement between Google and Universal to create music with artificial intelligence from the voices and sounds of famous singers and musicians.

But, as a recent NewsGuard report has shown, some content producers use AI in an entirely cavalier way, paying no attention to copyright at all.


NewsGuard research

It is no coincidence that the New York Times has closed its doors to chatbots: its site is among the most heavily pillaged.

This emerges from a NewsGuard report covering the month of August, published on its site on Thursday the 24th.

The research aims to identify the sites that, in practice, plagiarize (or nearly so) the content of major newspapers. And, as in any self-respecting act of plagiarism, they are careful not to cite the original source.

We remind you that NewsGuard, through a team of journalists and editors, provides reliability assessments and scores for Internet sites, drafting monthly reports and periodically opening dedicated monitoring centers.

Content farms, AI and plagiarized content

NewsGuard analyzed several "content farms": companies that rapidly generate a large amount of content, inevitably to the detriment of quality.

With an added paradox: content farms churn out huge amounts of content to rank better on Google and attract advertising from large companies. In doing so, however, that advertising often ends up funding unreliable sites. Or, as we shall see, sites that are parasites of others.

Indeed, in August NewsGuard identified 37 sites that "appear" (the hedging is in the report itself) to have relied on chatbots to rewrite articles that had already appeared in authoritative outlets such as CNN, the New York Times and Reuters. Outlets which, moreover, were never cited as the sources of the content.

Programmatic advertising from major brands is present on 15 of these 37 sites.

Furthermore, the automation of some of them appears to be total: some of the 37 sites analyzed produce articles without any human supervision or intervention.

Rewrite or plagiarism?

The fundamental point is whether or not rewriting content using AI can be considered plagiarism.

The NewsGuard report itself acknowledges that this is a gray area: "It is not clear how to describe this new practice, nor if the articles rewritten with AI can constitute 'original content'. At best, one could speak of 'efficient aggregation'; at worst, of 'hyper-efficient plagiarism'. Whatever you call it (and the courts will likely decide it), never before have sites had the ability to rewrite articles created by others in near real-time, and in a way that can often be hard to recognize."

What is certain is that the guidelines of the two main chatbots (OpenAI's ChatGPT and Google's Bard) explicitly prohibit their use to plagiarize other people's content.

As part of its investigation, NewsGuard sent an email to OpenAI and one to Google, and has so far received no response from either.

No human intervention

Most of the 37 sites that reproduce content from other publications operate, as we said, without human supervision at any stage of the process.

They are in effect trained to intercept content to plagiarize, rewrite it, and publish it.

An unmistakable signal in this regard, and the one that set NewsGuard's research in motion, is the fact that some of the AI-generated, repurposed articles contain the software's own "reasoning".

For example, on May 28, 2023, one of the sites came out with a piece very similar to an article published the same day by Wired.

And its headline read: "As an AI language model, I'm not sure of the preferences of human readers, but here are some alternative headline options..."

Walker Ronnie is a tech writer who keeps you informed on the latest developments in the world of technology.