ARTICLE
22 February 2022

DEEPFAKES – A Threat To Facial Recognition Technology

Khurana and Khurana

Contributor

K&K is among leading IP and Commercial Law Practices in India with rankings and recommendations from Legal500, IAM, Chambers & Partners, AsiaIP, Acquisition-INTL, Corp-INTL, and Managing IP. K&K represents numerous entities through its 9 offices across India and over 160 professionals for varied IP, Corporate, Commercial, and Media/Entertainment Matters.

INTRODUCTION

Deepfakes have made a name for themselves in the dynamic tech realm of the internet by manipulating videography and photography techniques. In simple terms, deepfake technology can flawlessly merge anyone from around the world into a video or photo in which they never took part. Such technology has existed for years; it is how the late actor Paul Walker was brought back to the screen for Fast & Furious 7. Previously, it took an array of experts to use this technology seamlessly and present the result to an audience; with newer machine learning systems, however, it has become simpler and cheaper to morph videos and images. One of the most notable examples is a video of Meta CEO Mark Zuckerberg that circulated on the internet, in which he appeared to claim control over the stolen data of billions of users. Deepfakes also marked their first presence in the Indian elections in February 2020, when a video of Delhi's BJP president criticizing his opponent surfaced on the internet.

DEEPFAKE SURPASSING FACIAL RECOGNITION TECHNOLOGY

Deepfake technology has been used for notorious pranks, committing online fraud, influencing public opinion, and embarrassing political officials. However, it does not stop there. The technology also poses an evident threat to biometric facial recognition systems through its use of the Generative Adversarial Network (GAN). A GAN is a machine learning architecture in which two neural networks are trained against each other: a generator produces fake images or videos, while a discriminator tries to tell them apart from real ones, pushing the generator to create ever more convincing forgeries capable of triggering both false positives and false negatives in recognition systems. Deepfakes are the latest manifestation of GANs, producing counterfeit pictures and videos that are exceptionally hard to distinguish from the originals.
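For readers curious about the mechanics, the following is a minimal, illustrative sketch of the adversarial training idea behind a GAN, written in Python with PyTorch. The toy data, network sizes, and hyperparameters are placeholders chosen purely for brevity and are not drawn from any deepfake system discussed in this article.

    # Minimal GAN sketch (illustrative only): a generator learns to mimic a
    # simple data distribution while a discriminator learns to tell real
    # samples from generated ones. Real deepfake systems use far larger
    # image-based networks; all sizes and values here are placeholders.
    import torch
    import torch.nn as nn

    # Generator: maps random noise to a fake "sample"
    generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
    # Discriminator: outputs the probability that a sample is real
    discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                                  nn.Linear(32, 1), nn.Sigmoid())

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for step in range(1000):
        # "Real" data: points from a known 2-D Gaussian (stand-in for real faces)
        real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, 2.0])
        noise = torch.randn(64, 8)
        fake = generator(noise)

        # Train the discriminator to label real samples 1 and fakes 0
        d_opt.zero_grad()
        d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
                 loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
        d_loss.backward()
        d_opt.step()

        # Train the generator to fool the discriminator (make fakes look real)
        g_opt.zero_grad()
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_loss.backward()
        g_opt.step()

As the two networks compete, the generator's output drifts toward the real distribution, which is why GAN-produced forgeries become progressively harder to distinguish from genuine material.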

Consider an organization that authenticates its members through identification proofs such as Aadhaar cards or driving licences and has no means of confirming the holder's physical presence. Deepfakes can be applied here to bypass such checks and gain access to personal data. Similarly, even biometric systems such as facial recognition technology (FRT) can be defeated by deepfakes.

To corroborate this, researchers from Sungkyunkwan University in Suwon, South Korea published a study demonstrating that APIs from Microsoft and Amazon can be fooled by commonly used deepfake-generation methods (for instance, GANs). Across a series of experiments, the researchers found that some deepfake generation methods posed a greater threat to recognition systems than others, and that each system reacted differently to deepfake impersonation attacks. They benchmarked the Microsoft and Amazon facial recognition APIs because both offer services that recognize celebrity faces, which made it relatively easy to generate deepfakes of the target identities. The researchers found that all of the APIs tested were susceptible to being fooled by deepfakes.
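By way of illustration only, the following is a minimal sketch of how such an impersonation test could be run against Amazon Rekognition's celebrity recognition endpoint using the boto3 SDK. The file names, region, and confidence threshold are hypothetical, and this is not the code used in the cited study.

    # Illustrative sketch only: submit a genuine photo and a deepfake of the
    # same celebrity to Amazon Rekognition and compare who the API thinks it
    # sees. File names and the threshold below are hypothetical placeholders.
    import boto3

    rekognition = boto3.client("rekognition", region_name="us-east-1")

    def top_celebrity(image_path):
        """Return (name, confidence) for the highest-confidence celebrity match."""
        with open(image_path, "rb") as f:
            response = rekognition.recognize_celebrities(Image={"Bytes": f.read()})
        faces = response.get("CelebrityFaces", [])
        if not faces:
            return None, 0.0
        best = max(faces, key=lambda c: c["MatchConfidence"])
        return best["Name"], best["MatchConfidence"]

    real_name, real_conf = top_celebrity("real_celebrity.jpg")      # genuine photo
    fake_name, fake_conf = top_celebrity("deepfake_celebrity.jpg")  # GAN-generated image

    print(f"Real image -> {real_name} ({real_conf:.1f}%)")
    print(f"Deepfake   -> {fake_name} ({fake_conf:.1f}%)")

    # If the deepfake is matched to the impersonated celebrity with high
    # confidence, the API has effectively accepted the impersonation.
    if fake_name == real_name and fake_conf > 90:
        print("The recognition API accepted the deepfake as the real person.")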

The above research throws light on the need for an appropriate defense mechanism against the malicious use of deepfakes.

LEGAL IMPEDIMENTS

Apart from the technology sector, legal systems across the world are trying to keep pace with the rapidly evolving technology behind deepfakes. The US was among the first countries to act, with the DEEPFAKES Accountability Act introduced in Congress in 2019; the bill would require anyone who creates a deepfake imitating a person to disclose it by adding a watermark to the imitation, and failing to do so would be an offence.

In India, there is no specific law criminalizing deepfakes. Sections 67 and 67A of the Information Technology Act, 2000 punish the publication or transmission of obscene and sexually explicit material in electronic form. Section 500 of the Indian Penal Code, 1860 provides punishment for defamation. Further, the Personal Data Protection Bill, 2019 provides for the protection of personal and non-personal data, including data relating to a natural person who is directly or indirectly identifiable. The bill permits the processing of such data only for a lawful purpose. Apart from laying down penalties for contravention of its provisions, the bill also has extraterritorial applicability, reaching creators of deepfakes outside India. Once passed, it would therefore play a significant role in regulating the creation and circulation of deepfakes.

CONCLUSION

With facial recognition becoming a more integrated part of our lives, from unlocking our phones to biometric security measures, it is quite evident that there needs to be a mechanism that could effectively identify deepfakes and prevent them from creating further security challenges.

Strong legislative backing is a must, but beyond that, detecting and controlling deepfakes should be a major goal. Governments and other regulatory bodies should take steps to ensure the authenticity of the videos that circulate. For instance, Microsoft, together with Amazon and Meta, initiated the Deepfake Detection Challenge, in which research teams from around the world competed to build deepfake detection models; Microsoft later unveiled a deepfake detection tool in the hope of combating disinformation. Deepfakes can also be tackled by holding Internet Service Providers (ISPs) liable for the transmission of such content. At present, section 79 of the Information Technology Act, 2000 limits the liability of ISPs: they incur no liability if they have exercised all due diligence, although the Act does not define what "due diligence" means. Another potential avenue for deepfake detection is blockchain technology. Blockchains maintain data blocks on a decentralized network where anyone may verify a piece of content's authenticity by comparing it against a unique, non-invertible fingerprint; even the tiniest alteration of the data produces a mismatch, as sketched below.
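To make that verification idea concrete, the following is a minimal Python sketch of checking a circulating video against a fingerprint recorded at the time of publication. In a blockchain-based scheme the registered hash would live in an immutable ledger entry; here it is simply a stored string, and the file name and hash value are hypothetical placeholders.

    # Illustrative sketch: verify a video against a previously registered
    # fingerprint. In a blockchain-based scheme the registered hash would be
    # stored in an immutable ledger entry; here it is simply a stored string.
    # The file name and hash below are hypothetical placeholders.
    import hashlib

    def fingerprint(path, chunk_size=1 << 20):
        """Compute a SHA-256 hash of the file (a non-invertible fingerprint)."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hash recorded at publication time (e.g. written to a blockchain entry)
    registered_hash = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"

    # Hash of the copy currently circulating online
    current_hash = fingerprint("circulating_video.mp4")

    if current_hash == registered_hash:
        print("Hashes match: the video is identical to the registered original.")
    else:
        print("Hash mismatch: the video has been altered since it was registered.")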

Research suggests that every new deepfake technique is soon superseded by a superior one that fixes every conceivable flaw in its predecessor. It is therefore critical that government officials verify the authenticity of content circulated online and invest more in technologies that can detect deepfakes. The most damaging effect deepfakes have on society is that they make genuine information seem implausible. Many believe that promoting true and authentic content is itself a way to combat deepfakes. Lastly, people should be made aware of deepfakes and the potential harm they pose; awareness should be actively raised, and individuals must exercise extreme caution and responsibility in protecting their personal information.

STATUTES

  • DEEPFAKES Accountability Act, H.R. 3230, 116th Congress (2019-2020) (US).
  • Information Technology Act, 2000, No. 21, Acts of Parliament, 2000 (India).
  • Indian Penal Code, 1860, No. 45, Acts of Parliament, 1860 (India).


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
