Deepfakes Can Have Serious Effects On The Second Half Of 2020

Dhir Acharya - May 12, 2020



Deepfakes, an imaging technique that uses AI, are a scary prospect: they could alter the outcome of the second half of 2020. But how?

Deepfakes, a term for imagery manipulated with Artificial Intelligence, are a frightening technology. They can derail an entire election even when the candidates’ faces are never actually manipulated.

One of the easiest ways to do that is by creating fake videos of candidates. Yet even that is not what concerns experts most.


The biggest concern

According to Kathryn Harrison, founder of the Deep Trust Alliance, an organization that researches and fights deepfakes, the biggest concern is that we can no longer tell whether a video is real or fake.

Deepfake researchers use the term “Liar’s Dividend” to describe how the mere existence of deepfakes erodes belief: people start to doubt their own eyes and assume that any video could be fake.

Manipulating images is nothing new, but deepfakes are something different: fake videos can make people appear to say or do things they have never done.

Examples of manipulated images

Bobby Chesney, the University of Texas law professor who coined the term Liar’s Dividend, said that conferences have been held in the US to educate people about the threat of deepfakes.

Nevertheless, those education efforts could also widen the problem: people who have just learned about deepfakes may dismiss a perfectly real video as fake.

A technology built to deceive viewers

The reason deepfakes can trick people lies in the technology used to create them: GANs (Generative Adversarial Networks). While AI has been around for decades, GANs have only existed for about six years.

Essentially, a GAN runs a constant trial-and-error process until its output can fool the human eye. It consists of two neural networks, a generator and a discriminator, which together can churn out millions of samples in the blink of an eye.

GAN training is a constant trial-and-error process that continues until it generates an image that can trick the human eye

A GAN is a system designed to trick you; it can generate images, voices, videos, or any other type of media, though the term deepfake is used most often for videos. As Chesney put it, with a true deepfake, you will never be able to tell.
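To make the generator-versus-discriminator idea concrete, here is a minimal sketch of the adversarial training loop, written in PyTorch under assumed settings: tiny fully connected networks, random Gaussian vectors standing in for real images, and placeholder sizes and learning rates. It is only an illustration of the technique, not the code behind any actual deepfake tool.

```python
# Minimal GAN training loop sketch (hypothetical example).
# The "real" data here is just Gaussian noise standing in for real images.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 16, 64  # noise size and fake-sample size (assumed values)

# Generator: turns random noise into a fake sample.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, DATA_DIM)      # stand-in for a batch of real images
    noise = torch.randn(32, LATENT_DIM)
    fake = generator(noise)

    # Discriminator step: learn to tell real samples from fakes.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to fool the discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Real deepfake pipelines swap these toy networks for large convolutional models trained on hours of footage, but the core loop of generating, detecting, and improving is the same.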

The rise of such technology has also produced new terms: shallow fakes and cheapfakes. These fake videos are easy to identify yet still remarkably effective, making them a major source of misleading information.

As more and more people readily believe whatever is shared on Facebook or YouTube, these simple videos become even more dangerous. Shocking fake clips spread quickly and can cause more trouble than official information on those platforms.

The result is a web of fake videos that people trust, while they doubt real but unexciting information.

Convincing deepfake videos are costly to make and are mostly created and shared within the academic sector. However, if one is released at the right time, such as a few days before an election, it can have a huge influence.

Researcher Clint Watts of the Foreign Policy Research Institute said that if a deepfake video surfaced 48 hours before Election Day, there would be no time to undo the damage.

