Deepfake

A deepfake is a piece of media that has been digitally manipulated using artificial intelligence (AI) to replace a person’s face or body so that they appear to be someone else. Deepfakes are often created with malicious intent to deceive people and spread misinformation.

The term was coined in 2017 by a Reddit user named “deepfakes,” whose community started the trend. Reddit has since banned that community, which had nearly 100,000 members and was known for creating AI-generated face-swapped celebrity video clips. “Deepfake” now covers any AI-generated video, image, or audio designed to appear real, such as realistic images of nonexistent people or celebrities.

Frequently asked questions

1. How Do Deepfakes Work?

Deepfakes rely on AI and machine learning (ML), combining techniques such as facial recognition algorithms with neural network architectures, most notably variational autoencoders (VAEs) and generative adversarial networks (GANs).

Neural networks are structures of interconnected nodes that work similarly to neurons in the human brain, and they are trained to minimize error. In the context of deepfakes, the network is trained to reduce the difference between the fake image and real images: the model’s weights are adjusted repeatedly until the output reaches the desired level of accuracy and the deepfake looks legitimate.
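
The sketch below illustrates that training loop with a minimal generative adversarial network in PyTorch. The framework choice, network sizes, and placeholder data are assumptions made for illustration only; real deepfake tools use far larger models and real image datasets, but the error-minimization cycle is the same.

```python
# Minimal GAN training loop sketch (illustrative assumptions: PyTorch, tiny
# fully-connected networks, random placeholder "real" images).
import torch
import torch.nn as nn

IMG_DIM = 64 * 64      # flattened 64x64 grayscale image (illustrative size)
NOISE_DIM = 100        # random input vector the generator starts from

# Generator: turns random noise into a fake image.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(32, IMG_DIM) * 2 - 1   # placeholder for a real dataset

for step in range(500):
    # Train the discriminator: real images should score 1, fakes should score 0.
    noise = torch.randn(32, NOISE_DIM)
    fake_images = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_images), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_images), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: its fakes should fool the discriminator (score 1).
    # Repeating this weight adjustment is what makes the fakes look legitimate.
    noise = torch.randn(32, NOISE_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```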

2. What Are Some Popular Deepfake Examples?

3. How to Identify a Deepfake?

4. Are Deepfakes Illegal?

5. Why Is Deepfake Technology Harmful?

6. Are There Any Companies That Use Deepfakes?
