What are deepfakes, and how can they be spotted?

Article by LABC

Computers have been getting increasingly good at simulating reality, and in this field deepfake technology has become more and more popular, largely thanks to social media platforms. TikTok, for example, hosts a steady stream of deepfake videos. But what is it all about?

The word “deepfake” combines the terms “deep learning” and “fake.” It refers to media in which a subject’s face or body has been digitally altered to make them look like someone else, usually a famous person. Deep learning algorithms, which teach themselves how to solve problems when given large sets of data, are in this case used to swap faces in video and digital content and produce realistic-looking fake media. There are several methods for creating deepfakes, but the most common relies on deep neural networks built around autoencoders that perform a face-swapping technique. While the technology needed to make deepfakes is sophisticated, it is becoming increasingly accessible, and no specific regulation governs it. Several apps and software packages make generating deepfakes easy even for beginners, and a large amount of deepfake software can be downloaded freely. Many of these apps are used for pure entertainment, which is why deepfake creation is not outlawed, while others are far more likely to be used maliciously.
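To make the autoencoder idea concrete, here is a minimal sketch of the classic shared-encoder, two-decoder setup, written in Python with PyTorch. The layer sizes, the flattened 64x64 face crops, and the function names are illustrative assumptions, not details taken from any particular deepfake tool; real systems use convolutional networks plus face alignment and blending steps.

```python
# Illustrative sketch only: the shared-encoder / two-decoder trick behind
# classic autoencoder face swaps. Sizes and layers are assumptions.
import torch
import torch.nn as nn

LATENT = 256
FACE = 64 * 64 * 3  # hypothetical 64x64 RGB face crops, flattened

# One encoder is shared by both identities, so it learns generic facial
# structure: pose, expression, lighting.
encoder = nn.Sequential(nn.Linear(FACE, 1024), nn.ReLU(), nn.Linear(1024, LATENT))

# Each identity gets its own decoder, which learns to render that specific
# person's appearance from the shared latent code.
decoder_a = nn.Sequential(nn.Linear(LATENT, 1024), nn.ReLU(), nn.Linear(1024, FACE))
decoder_b = nn.Sequential(nn.Linear(LATENT, 1024), nn.ReLU(), nn.Linear(1024, FACE))

def train_step(faces_a, faces_b, opt):
    """One reconstruction step: each decoder learns its own identity."""
    loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) \
         + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def swap_a_to_b(face_a):
    """The swap itself: encode person A's face, decode with B's decoder."""
    with torch.no_grad():
        return decoder_b(encoder(face_a))
```

Because the encoder never learns identity-specific detail, feeding it a face of person A and decoding with person B's decoder yields B's appearance driven by A's expression, which is exactly the face-swap effect described above.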

Indeed, beyond creative and entertainment uses, deepfakes are increasingly deployed in disinformation campaigns, for identity fraud, and to discredit public figures and celebrities. Deepfake technology also raises broader social problems: fabricated clips can be passed off as “proof” for other fake news and disinformation, they can discredit celebrities and others whose livelihood depends on sharing content while maintaining a reputation, and they make it harder to provide verifiable footage for political communication, health messaging and electoral campaigns. Deepfake videos have also been used in politics. In 2018, for example, a Belgian political party released a video of Donald Trump giving a speech calling on Belgium to withdraw from the Paris climate agreement. Trump never gave that speech: it was a deepfake, and it was not the first use of the technology to produce misleading videos.

For all these reasons, many experts believe that deepfakes will become far more sophisticated as the technology develops, and may pose more serious threats to the public, from election interference to political tension and further criminal activity.

Unfortunately, there is no foolproof strategy for detecting deepfakes. One of the best defences for users against harmful deepfakes is to pay attention and stay alert. Typically, the first sign of a deepfake is that something seems “off”. If so, it is worth looking more closely at the subject’s face and asking whether any detail seems disjointed, forced or otherwise unnatural. Evaluating the whole context is also important: ask what the figure is saying or doing, search online for keywords about the video, verify the reliability of the source and, on social media platforms, check whether the author’s account is verified.

Many of the behaviours suggested above draw on media and digital literacy skills and require exercising good judgment. Where common sense fails, there are some more in-depth ways to try to spot deepfakes. For example, as experts in the field suggest, it can be worth searching for keywords used in the video to see whether a public transcript of the speech exists, or taking a screenshot of the video as it plays and running a Google reverse image search (a simple way to grab such frames is sketched below). This last strategy can reveal whether an original version of the video exists, making it possible to compare it with the dubious one.
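As a practical aid for that screenshot step, here is a small sketch that pulls a few evenly spaced frames out of a suspect clip so they can be uploaded to a reverse image search. It assumes the OpenCV library is installed (pip install opencv-python), and the file name "suspect_clip.mp4" is a hypothetical example, not a reference from the article.

```python
# Sketch: save a handful of evenly spaced frames from a video file,
# ready to be fed to a reverse image search. Assumes OpenCV is installed.
import cv2

def extract_frames(video_path: str, count: int = 5) -> list[str]:
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    saved = []
    for i in range(count):
        # Jump to an evenly spaced position in the clip.
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // count)
        ok, frame = cap.read()
        if not ok:
            continue  # skip unreadable positions rather than failing
        out = f"frame_{i}.png"
        cv2.imwrite(out, frame)
        saved.append(out)
    cap.release()
    return saved

if __name__ == "__main__":
    # "suspect_clip.mp4" is a placeholder path for illustration.
    print(extract_frames("suspect_clip.mp4"))
```

Sampling several frames rather than one increases the chance that at least one matches an indexed original, since reverse image search works frame by frame, not on the video as a whole.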

Finally, if we do recognise a deepfake, it is important not to keep the discovery to ourselves: it is always worth sharing what we have found and reporting the content.