
A video of South Indian actress Rashmika Mandanna went viral recently, in which her face was superimposed on another person’s body. The actress, who was trolled over the video, expressed her shock and clarified that it was not her. The video was created using deepfake technology, a form of artificial intelligence that can manipulate media to make it look real. The technology has raised concerns about its potential misuse and impact on society. In this article, we explain what deepfake technology is, how it works, how to spot it, and what its legal implications are in India.
What is deepfake technology?
The term deepfake is a portmanteau of “deep learning” and “fake”, and refers to the use of AI techniques to alter or generate media such as images, videos, or audio. Deepfake technology can replace a person’s face or voice with another’s, making it seem like they did or said things they never did. It can also create entirely new content, such as synthetic faces or voices that do not belong to any real person.
How does deepfake technology work?
Deepfake technology works by using two types of neural networks, which are computer systems that learn patterns from data. The first is an encoder, which analyses the source content, such as the original face or voice, and compresses it into a small set of essential features. The second is a decoder, which uses those features to generate new content, such as a fake face or voice. A common face-swap setup trains one shared encoder together with two decoders, one per person; feeding person A’s encoded features into person B’s decoder produces the swap. This process is repeated for each frame of the video or each segment of the audio to ensure consistency and realism.
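The encoder–decoder idea can be illustrated with a toy example. The sketch below is a minimal linear autoencoder in pure Python: the “encoder” compresses a 2-D point into a single latent value and the “decoder” reconstructs the point from it. The data, weights, and learning rate are all hypothetical stand-ins; real deepfake systems use deep convolutional networks on images, not two-number points.

```python
# Toy linear autoencoder (illustrative only, not a real deepfake pipeline).
# Hypothetical data: points on the line y = 2x, standing in for face features.
data = [(x, 2 * x) for x in (-1.0, -0.5, 0.5, 1.0)]

w1, w2 = 0.5, 0.5   # encoder weights: point -> single latent value
u1, u2 = 0.5, 0.5   # decoder weights: latent value -> reconstructed point
lr = 0.05           # learning rate (chosen arbitrarily for the demo)

def loss():
    """Mean squared reconstruction error over the toy dataset."""
    total = 0.0
    for x, y in data:
        c = w1 * x + w2 * y           # encode: compress to latent code
        xr, yr = u1 * c, u2 * c       # decode: reconstruct the point
        total += (xr - x) ** 2 + (yr - y) ** 2
    return total / len(data)

initial_loss = loss()
for _ in range(1000):
    for x, y in data:
        c = w1 * x + w2 * y
        xr, yr = u1 * c, u2 * c
        # Gradients of the squared error, backpropagated by hand.
        dc = 2 * (xr - x) * u1 + 2 * (yr - y) * u2
        u1 -= lr * 2 * (xr - x) * c
        u2 -= lr * 2 * (yr - y) * c
        w1 -= lr * dc * x
        w2 -= lr * dc * y
final_loss = loss()
```

After training, the reconstruction error should be far below its starting value, which is the whole point of the encoder–decoder pair: the latent code must keep enough information to rebuild the input.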
Deepfake technology also uses a framework called a Generative Adversarial Network (GAN), which consists of two competing neural networks: a generator and a discriminator. The generator tries to create fake content that can fool the discriminator, while the discriminator tries to distinguish between fake and real content. The generator and the discriminator learn from each other and improve their performance over time, resulting in more convincing and high-quality deepfakes.
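The generator–discriminator tug-of-war can be sketched in a few dozen lines. The example below is a hypothetical 1-D GAN in pure Python: the “real data” are just numbers near 4.0, the generator is a two-parameter linear map, and the discriminator is a logistic scorer. All parameters and constants are illustrative assumptions; real GANs for images use deep networks and far more elaborate training.

```python
import math
import random

random.seed(0)

# Toy 1-D GAN: real "data" are numbers near 4.0 (a stand-in for real images).
REAL_MEAN, REAL_STD = 4.0, 0.5

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

gu, gv = 1.0, 0.0    # generator: g(z) = gu * z + gv
dw, db = 0.1, 0.0    # discriminator: D(x) = sigmoid(dw * x + db)
lr = 0.02            # learning rate (arbitrary demo value)

for _ in range(3000):
    real = random.gauss(REAL_MEAN, REAL_STD)
    z = random.gauss(0.0, 1.0)
    fake = gu * z + gv

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    dr = sigmoid(dw * real + db)
    df = sigmoid(dw * fake + db)
    dw += lr * ((1 - dr) * real - df * fake)
    db += lr * ((1 - dr) - df)

    # Generator step (non-saturating loss): push D(fake) toward 1,
    # i.e. nudge the generator toward samples the discriminator calls real.
    df = sigmoid(dw * fake + db)
    grad = (1 - df) * dw              # d log D(fake) / d fake
    gu += lr * grad * z
    gv += lr * grad

# After training, generated samples should cluster near the real data.
gen_mean = sum(gu * random.gauss(0.0, 1.0) + gv for _ in range(500)) / 500
```

The adversarial dynamic is visible even at this scale: the discriminator learns that large values look “real”, and the generator, chasing the discriminator’s approval, drifts its output toward the real data’s neighbourhood.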
How did deepfake technology start?
The term deepfake was first coined in late 2017 by a Reddit user who used deep learning technology to superimpose celebrities’ faces onto pornographic videos. This incident attracted significant attention and controversy, as it violated the privacy and consent of the celebrities involved. By 2018, the technology had become easier to use, thanks to open-source libraries and tutorials shared online. By the early 2020s, deepfakes had become more accessible and harder to detect, and were being used for various purposes, such as entertainment, education, art, politics, and crime.
How to spot deepfake content?
It is not easy to spot deepfake content, but it is not impossible either. Identifying it requires attention to visual and audio cues, as well as other indicators, such as:
- Facial expressions and anomalies: Look for unnatural facial expressions, mismatched lip-sync, irregular blinking, or blurred edges. Deepfake technology may not be able to capture the subtle nuances and emotions of human faces or may produce artifacts and glitches in the output.
- Audio discrepancies: Listen carefully for shifts in tone, pitch, or unnatural speech patterns. Deepfake technology may not be able to replicate the exact voice or accent of a person or may produce inconsistencies and distortions in the sound.
- Body posture and skin tone: Look for mismatches between the face and the rest of the body. Deepfake technology may fail to adjust the face to the person’s body shape or skin tone, producing noticeable differences and contrasts in the output.
- Location and lighting: Look for incongruous locations or lighting between the background and the foreground. Deepfake technology may not be able to match the lighting or perspective of the scene or may produce unrealistic or unnatural effects in the output.
- Source and context: Check the source and context of the content, such as the date, time, location, or purpose of the video or audio. Deepfake content often lacks verifiable or credible information about its origin, or appears out of place or inconsistent with reality.

What are the legal implications of deepfake technology in India?
If someone makes and shares deepfake videos of another person, even as a joke, they may face legal action under various sections of the Indian Penal Code (IPC), which can carry a heavy fine or imprisonment. If a person’s reputation or dignity is harmed by a deepfake video, they may also file a defamation case against the creator or the sharer of the content. Action may additionally be taken against social media companies under the Information Technology (IT) Rules, 2021, which require such content to be removed from their platforms within 36 hours of a complaint. However, India does not have a specific law or regulation to deal with deepfake technology, which poses a challenge for the authorities and the victims. There is therefore a need for better awareness, detection, and prevention of deepfake technology, as well as ethical and responsible use of AI.