Generative artificial intelligence, that is, AI that produces new content, is on everyone's lips.
And it couldn't be otherwise: both because of the increasingly central role it plays, and will play, in our lives, and because it is producing increasingly surprising results (think of the video clips from Sora, OpenAI's latest creation).
A far-sighted volume that we recently reviewed raises the question of how AI should be regulated for its virtuous and safe use. Needless to say, one of the greatest risks is that artificial intelligence generating images or texts will produce new and refined forms of fake news, increasingly difficult to disprove.
In this sense, the latest NewsGuard report is merciless, identifying as many as 725 news and information sites generated by artificial intelligence with little or no human review.
The NewsGuard report
In the latest report, NewsGuard tells us that it has “identified 725 AI-generated news and information sites that operate with little or no human oversight, and is monitoring false narratives produced by AI-based tools.”
The sites in question operate in 15 languages: Arabic, Chinese, Czech, Korean, French, Indonesian, English, Italian, Dutch, Portuguese, Spanish, Tagalog, German, Thai and Turkish.
Let’s find out more about the report.
AI-generated news sites: the four criteria
The 725 sites identified by NewsGuard, which publish news generated by artificial intelligence, meet four requirements.
First, there is irrefutable evidence that a substantial portion of the site's content is produced by AI.
Then, “there is clear evidence that content is published without significant human oversight. For example, numerous articles contain error messages or other formulas typical of chatbot responses, thus demonstrating that the content was produced by AI tools without adequate control.”
Furthermore, the outward appearance of the sites can lead the average user to believe that the content is produced by a human editorial team.
Finally, the sites in question do not openly reveal that the news reported is the work of artificial intelligence.
How these sites present themselves, and how they make money
NewsGuard delves into the third of the four criteria, explaining why AI-generated news sites look deceptive. "They typically have generic names – such as iBusiness Day, Ireland Top News and Daily Time Update – which to the lay reader might appear to be those of legitimate news sites. Yet, rather than offering traditionally created and curated journalistic content, these sources operate with little or no human oversight, and publish articles written largely or entirely by bots."
These are sites that have created hundreds of articles on the most varied topics, often containing clearly false statements.
We then read that in many cases these sites earn their income through programmatic advertising. It is surprising how often advertisers fail to verify the nature or quality of the sites on which their ads appear.
NewsGuard rightly says: "Until brands take steps to exclude untrustworthy sources from the list of those authorized to publish their ads, their ads will continue to appear on these types of sites, creating an economic incentive for their large-scale development."
Artificial intelligence sites and hoaxes
It is clear that the greatest risk in almost completely entrusting the production and dissemination of news to artificial intelligence is that of multiplying the spread of fake news.
Here NewsGuard cites as an example some false news generated by AI. Let’s look at three.
Three examples of fake news generated by AI
A site run by the Chinese government cites AI-generated information as a reliable source to support the false theory that the United States operates a bioweapons laboratory in Kazakhstan. According to this theory, camels are infected in the laboratory in order to harm the Chinese population.
The report then cites the case of an AI-generated site that spread the news of the suicide of Netanyahu's alleged psychiatrist. In reality, this was merely a reworking, in a serious tone, of a satirical article published back in 2010.
Finally, in late 2023 NewsGuard had already analyzed 17 TikTok accounts that created conspiracy videos using AI-powered text-to-speech tools.