The deepfake is a perfect demonstration of how technology keeps reaching ever more exciting destinations, and of how easily it can be put to perverse uses.
The deepfake is a technique for synthesizing human images with artificial intelligence, which makes it possible to create videos and facial and body images from pre-existing ones.
The uses vary widely: beyond parody, consider (as we wrote in a previous article) how dubbed films could be made to look more convincing.
But that’s not all: staying within the world of cinema, big stars can be made to act without ever setting foot on set, as happened to Bruce Willis in a commercial for a Russian mobile phone company.
And here non-trivial ethical problems arise, or could arise.
The risks of deepfakes
Yes, because while Bruce Willis consented to the use of his likeness in the aforementioned advertisement, in other cases the actors’ consent was never sought.
In the Star Wars saga, for example, the late Carrie Fisher was made to “perform”.
But that’s not all: advertisements featuring unauthorized deepfakes of figures such as Elon Musk, Leonardo DiCaprio and Tom Cruise abound. The risk is obvious: with a deepfake, anyone can be made to say anything.
A famous example is the fake Volodymyr Zelensky who, last spring, in a video that was later removed, urged Ukrainian troops to surrender.
In short: as with fake news, the danger of deepfakes is that the already blurred line between true and false becomes even blurrier.
And as this technology improves, the question becomes: how do you recognize a deepfake?
An answer may come from Intel, which has just presented its FakeCatcher technology. Let’s find out what it is.
FakeCatcher, the technology to recognize the deepfake
The US company Intel announced on its official website that it has developed a technology capable of recognizing deepfakes in real time, with 96% accuracy.
The technology is called FakeCatcher. Let’s see what it is and how it works.
How FakeCatcher works
An Intel team led by Ilke Demir developed FakeCatcher together with a group at Binghamton University in New York led by Umur Ciftci. The technology is based on machine learning and works by scrutinizing faces for inconsistencies.
But how does it manage, with an error rate of just 4%, to recognize a deepfake? By exploiting a feature that is almost imperceptible to the naked eye: blood flow.
Recognize the deepfake from the bloodstream
Blood flow in a person’s face, that is, the heart pumping blood through it, produces subtle color changes in the blood vessels, and those changes also show up in the pixels of a video.
FakeCatcher captures blood-flow signals from 32 points on the face, and its algorithms translate them into spatiotemporal maps. It does so using photoplethysmography (PPG), an optical measurement technique known since the 1970s that measures the amount of light absorbed or reflected by blood vessels in living tissue.
Then, thanks to deep learning, FakeCatcher can decide in real time whether a video of a given person is authentic.
For this technology, recognizing a deepfake therefore means finding inconsistencies in the blood-flow data.
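Intel has not published FakeCatcher’s internals, but the PPG idea described above can be sketched in a few lines: average a color channel over a face region in each frame, then check whether the resulting signal pulses at a plausible human heart rate. This is a minimal illustration only; the function names, the single region of interest, and the frequency thresholds are our own assumptions, not Intel’s implementation.

```python
import numpy as np

def extract_ppg_signal(frames, roi):
    """Average the green channel over a face region, frame by frame.

    frames: array of shape (T, H, W, 3); roi: (y0, y1, x0, x1).
    Returns a 1-D signal of length T (one value per frame).
    """
    y0, y1, x0, x1 = roi
    return frames[:, y0:y1, x0:x1, 1].mean(axis=(1, 2))

def dominant_frequency(signal, fps):
    """Strongest frequency (Hz) in the signal, found via FFT."""
    sig = signal - signal.mean()          # remove the DC component
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    return freqs[spectrum.argmax()]

def looks_like_heartbeat(signal, fps, lo=0.7, hi=4.0):
    """A plausible human pulse lies roughly between 42 and 240 bpm."""
    return lo <= dominant_frequency(signal, fps) <= hi

# Synthetic demo: 10 s of 30 fps "video" whose green channel
# pulses at 1.2 Hz (~72 bpm), mimicking a real face.
fps, T = 30, 300
t = np.arange(T) / fps
frames = np.full((T, 8, 8, 3), 128.0)
frames[..., 1] += 2.0 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
sig = extract_ppg_signal(frames, (0, 8, 0, 8))
print(looks_like_heartbeat(sig, fps))
```

A real detector would track many facial regions, compare their signals for spatial consistency, and feed the resulting maps to a trained classifier; this sketch only shows why a synthesized face, lacking a coherent pulse, can betray itself.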
Virtues and limits of this technology
It goes without saying how useful such a technology could be for recognizing deepfakes, while we wait, of course, for regulation on the matter to be drawn up.
Because faking the statements of more or less well-known figures can give rise to diplomatic incidents, revenge porn, retaliation and even political tensions.
To mention a similar case, we recall that the Italian Data Protection Authority (the Garante) recently opened an investigation into FakeYou, the app that lets you create messages imitating the voices of famous people.
How can we be sure, for example, that a low-definition deepfake does not slip past FakeCatcher?
Ilke Demir is reassuring on this point, saying that even with poor video quality, FakeCatcher’s accuracy does not drop below 91%.
But what will happen when deepfake technology becomes more advanced? Will the tools for recognizing deepfakes be able to evolve at the same speed?
It almost feels like the contemporary version of the now classic virus-versus-antivirus arms race.