
Triumph of the simulacra – How deepfakes aim to rule our minds

Deepfakes are famous for their fake pornography and YouTube videos with dancing politicians. But how else could they challenge our society?

A threat from the ’90s

According to Antispoofing Wiki, deepfake technology dates back to 1997, when the first digital face manipulation tool, Video Rewrite, was introduced by Interval Research Corporation. Oddly enough, the first deepfake in history was political – it made JFK appear to lip-sync the phrase “I never met Forrest Gump.”

In 2017, deepfake videos became a common threat as the production tools became widely available to ordinary users. Someone under the nickname “deepfakes” posted a batch of porn videos on Reddit. In them, the faces of a few Hollywood actresses – Gal Gadot among them – were glued onto real divas of the adult genre with the black magic of generative deep learning. This is how the deepfake era began.

Currently, deepfakes are considered the most serious threat posed by AI and machine learning technologies. Crime Science reports that deepfakes are capable of producing devastating societal harm: from fake political news to theft of money enabled by realistic audio and video impersonation.

The study also mentions that deepfake technology is simple to proliferate: it can be quickly shared, sold and copied by perpetrators – unlike physical crime tools such as firearms, which require covert logistics.

So why are deepfakes so dangerous?

Destructive qualities of deepfakes

Tampered media can cause unpredictable results. For example, deepfake allegations nearly sparked an upheaval in Gabon. Senior military officers accused the presidential administration of using a synthetic video of the country’s leader, Ali Bongo Ondimba, who was rumored to have died after falling gravely ill.

Apparently, to avoid losing power, corrupt officials quickly concocted a deepfake New Year’s address that would appease the wary public and help buy some time.

Image: Bongo’s alleged use of Botox and his poor health sparked unfounded speculation among his rivals

The fabricated rumors were used by National Guard officers as a pretext to seize the central radio station – they urged citizens to stop whatever they were doing and flood the streets in righteous anger. The coup, however, failed.

Audio deepfakes appear to be an equally serious threat. In the United Arab Emirates, a massive heist was orchestrated using a voice cloning tool. The fraudsters imitated the voice of a company director and managed to have $35 million transferred from a bank in Hong Kong.

The pressing issue of deepfakes has sparked regional and international concern. For example, the European Parliament published a study called Tackling Deepfakes in European Policy. Among the risks brought by the technology, the document lists intimidation, extortion, identity theft, and electoral and stock market manipulation, among others.

However, two of the most destructive properties of deepfakes are the liar’s dividend and truth apathy. While some are paranoid that one day they will be targeted by the heinous technology and endangered beyond belief, others can rejoice: deepfakes will finally allow them to refute any compromising material.

The liar’s dividend can have a frightening impact on our society. The “don’t believe what you see” paradigm may actually help some unscrupulous politicians and public figures out of a scandal.

Even though the legitimacy of a video or audio recording can be confirmed by technical means – such as double compression analysis – regular observers are often suspicious of experts’ verdicts. It is always easy to dismiss something you don’t really understand.
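
To give a taste of what such technical means look like, here is a rough Python sketch of error level analysis (ELA), a simple relative of double-compression checks: the image is re-saved at a known JPEG quality, and regions that were edited or pasted in often show a different error level than the rest of the picture. The file names and quality setting are placeholders, and a real forensic tool would go much further.

```python
from PIL import Image, ImageChops

# Re-save the suspect JPEG at a known quality and measure how much
# each pixel changes. Edited regions often re-compress differently.
original = Image.open("suspect.jpg").convert("RGB")   # placeholder file name
original.save("resaved.jpg", "JPEG", quality=90)      # illustrative quality
resaved = Image.open("resaved.jpg")

diff = ImageChops.difference(original, resaved)
extrema = diff.getextrema()  # per-channel (min, max) error levels
max_error = max(channel_max for _, channel_max in extrema)

# A high or very uneven error level is a hint (not proof) of tampering.
print("Maximum error level:", max_error)
```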

If the liar’s dividend is Phobos, then truth apathy is Deimos in this duet. Unable to trust their own senses, people may overlook truly important material. Until there is a reliable, trustworthy, and universally available way to distinguish fake from bona fide media, deception will prevail over common sense.

A challenger appears

The deepfake is not alone: it has a sibling called the “cheapfake”. Cheapfakes are a type of fake media that is easy, cheap, and quick to produce. Scammers don’t even need neural networks to make them.

They can produce cheap counterfeits in gargantuan quantities with simple editing tools: Movie Maker, Adobe Premiere/Audition and, of course, Photoshop. The famous “drunken Pelosi” prank is a textbook example: it was produced by simply slowing down the original video, making the target appear inebriated.
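
To show just how low the bar is, here is a minimal Python sketch (using OpenCV; the file names and the 0.75x factor are placeholders) of the same kind of manipulation: the frames are left untouched, only the playback rate is changed.

```python
import cv2

SLOWDOWN = 0.75  # illustrative factor: 75% of the original speed

cap = cv2.VideoCapture("input.mp4")          # placeholder input file
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Write the very same frames, but declare a lower frame rate,
# so playback is slowed and speech sounds slurred once re-muxed with audio.
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("slowed.mp4", fourcc, fps * SLOWDOWN, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)

cap.release()
out.release()
```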

Cheapfakes didn’t get their nickname for nothing. They are indeed cheap – and often pretty easy to spot, as in the Pelosi case. However, in regions where technological literacy leaves much to be desired, cheap counterfeits can lead to tragic events.

In 2018, a series of cheapfakes started circulating in Indian WhatsApp group chats. They showed bikers “kidnapping” children for organ harvesting and were accompanied by truly gruesome images of dead children supposedly “killed by the reapers”.

This quickly sowed panic and paranoia in villages in Karnataka, Maharashtra and other Indian states. Villagers formed lynch mobs and attacked strangers, tourists and random bikers – at least 20 people were killed as a result of this hoax.

In reality, the cheapfake relied on recontextualization, presenting unrelated images in a completely different light. For example, the images of dead children had been taken years before the hysteria to document child casualties of war. As for the “kidnappers on bikes”, the footage was simply a clip taken from a public service advertisement warning parents how easily a child could be kidnapped.

Countermeasures

Experts point to lack of awareness and technological illiteracy as the two main factors that triggered the mass lynchings in India. Another essential factor is that social media and messaging apps are ideal channels for the proliferation of deepfakes and cheapfakes, which lets them spread like a viral disease.

Currently, there are only a handful of methods to neutralize fake media. First, researchers recommend paying attention to visual cues: unnatural alignment of facial features, odd skin tone, posture, gestures, and mismatches between lip movements and voice. Additionally, artifacts (such as distortion or blurring) can be spotted in areas where one part of the body morphs into another: the neck, elbows, wrists, etc.
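
Some of these cues can even be checked crudely in code. The sketch below (Python with OpenCV; the file name is a placeholder and the logic is deliberately naive) compares the sharpness of a detected face with the rest of the frame – blending artifacts in face swaps sometimes leave the face noticeably blurrier or sharper than its surroundings.

```python
import cv2

frame = cv2.imread("frame.jpg")  # placeholder: a single still from the video
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Classic Haar cascade face detector shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def sharpness(img):
    # Variance of the Laplacian: a standard, if rough, blur/focus measure.
    return cv2.Laplacian(img, cv2.CV_64F).var()

frame_sharpness = sharpness(gray)
for (x, y, w, h) in faces:
    face_sharpness = sharpness(gray[y:y + h, x:x + w])
    ratio = face_sharpness / (frame_sharpness + 1e-6)
    # A ratio far from 1 is only a hint, not a verdict.
    print(f"Face at ({x}, {y}): sharpness ratio {ratio:.2f}")
```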

Second, mention should be made of the Content Authenticity Initiative (CAI) launched by Adobe. The initiative aims to set standards and introduce a universal platform that protects original media content from malicious tampering. This is achieved by embedding tamper-evident metadata – data that records who produced the content, and where and when it was produced.
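
The CAI’s real specification is far more involved, but the core idea can be sketched in a few lines of Python: bind a signed record of who, where and when to a hash of the content bytes, so that any later tampering breaks verification. The key, author and field names below are purely illustrative and are not the actual CAI format.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

SECRET_KEY = b"publisher-signing-key"  # made-up key for the sketch

def make_provenance_record(path, author, location):
    # Hash the exact bytes of the media file and sign a small manifest.
    with open(path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()
    record = {
        "author": author,
        "location": location,
        "created": datetime.now(timezone.utc).isoformat(),
        "content_sha256": content_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(path, record):
    # Any change to the pixels/samples or to the manifest breaks verification.
    with open(path, "rb") as f:
        if hashlib.sha256(f.read()).hexdigest() != record["content_sha256"]:
            return False
    payload = json.dumps(
        {k: v for k, v in record.items() if k != "signature"}, sort_keys=True
    ).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```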

But of course, these countermeasures won’t work on their own. They need strong support from educators around the world – starting in schools and extending to communities in less developed regions. Ignorance is fertile ground for many negative phenomena, and deepfakes are one of them.
