IP Helpdesk
  • News blog
  • 28 August 2024
  • European Innovation Council and SMEs Executive Agency
  • 5 min read

Deepfake: A Global Crisis

Today, artificial intelligence has become the new normal. With the development of AI technology, new challenges have emerged, and deepfakes are among the most notorious.

Deepfakes are synthetic or doctored media that have been digitally manipulated to convincingly misrepresent or impersonate someone using artificial intelligence (AI)[1]. Creating deepfake videos involves sophisticated AI technology, but many types of software are now available for producing them.

Deepfakes have serious implications for data security, privacy, and intellectual property rights. When deepfake technology is misused, it can lead to privacy violations, personal data breaches, and the unauthorised use of people's likenesses for profit or commercial purposes.

Threats of Deepfake

Deepfakes have emerged as a powerful tool for spreading misinformation and disinformation, posing serious challenges to society. By creating extremely convincing fake media, they blur the line between reality and fiction and enable compelling narratives that shape public opinion. Malicious actors use deepfakes to portray people saying or doing things they never actually said or did. Because deepfakes are sensational, they captivate audiences, generate significant attention, and spread rapidly[2].

The rapid circulation of deepfakes on social media makes false information difficult to contain. Deepfakes are increasingly designed to exploit vulnerabilities in specific individuals or organisations, and the growing sophistication of the technology poses challenges for detection and debunking. As deepfakes become more realistic, distinguishing between genuine and manipulated content becomes ever more difficult[3].

In the fields of medicine and finance, deepfakes raise serious concerns, including fabricated or altered medical records, identity theft, and fake KYC (know-your-customer) verification[4].

Scenario in India

A survey conducted by cybersecurity firm McAfee revealed that more than 75% of Indians who are online and were surveyed have seen some form of deepfake content, while at least 38% of those surveyed have encountered a deepfake scam during this period[5]. Approximately 47% of Indian adults have experienced, or know someone who has experienced, an AI voice scam[6]. According to McAfee's report on AI voice scams, approximately 83% of Indian victims reported monetary losses, with 48% losing more than INR 50,000[7].

The Legal Regime 

India does not yet have legislation specifically addressing deepfakes, and the current legislation on cyber offences committed using deepfakes is not adequate to fully address the issue[8]; however, the Ministry of Electronics and Information Technology (MeitY) is developing appropriate legislation to prevent the misuse of deepfakes[9].

MeitY has issued an advisory[10] to social media intermediaries to regulate deepfakes[11]. The advisory mandates that intermediaries communicate prohibited content clearly and precisely to users through their terms of service and user agreements, and that this be expressly conveyed to the user at the time of first registration and through regular reminders[12].

The advisory also emphasises the importance of digital intermediaries informing users about the applicable penal provisions. Social media intermediaries must make reasonable efforts to prevent users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information falling within the prohibited categories. This rule is intended to ensure that platforms identify and promptly remove misinformation, false or misleading content, and material impersonating others, including deepfakes[13].

Another advisory[14] issued by MeitY to social media intermediaries mandates that due diligence be exercised and reasonable efforts be made to identify deepfakes, and that any such content, once reported, be removed within 36 hours of the report.

Remedies are available under the Information Technology Act, 2000, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and the Bharatiya Nyaya Sanhita, 2023, and these can be availed of to address deepfake-related offences[15].

The Copyright Act, 1957, also contains penal provisions for various offences, including copyright infringement[16]. It expressly forbids using another person's work without permission, especially where that person holds exclusive rights in it.

Deepfakes breach personal data and violate an individual's right to privacy. Since pictures and images are sensitive personal data capable of identifying the individual concerned, protection can also be availed of under the relevant provisions of the Digital Personal Data Protection Act, 2023.

Twenty-nine countries, including India, as well as the European Union have joined forces in the Bletchley Declaration[17] to prevent 'catastrophic harm, either deliberate or unintentional' caused by the growing use of artificial intelligence. The Declaration marks a step forward in cooperation and collaboration among countries on existing and potential AI risks, setting an agenda for identifying risks in the AI arena and developing risk-based policies across countries, including greater transparency from private players developing frontier AI capabilities[18].

India also chaired the Global Partnership on Artificial Intelligence (GPAI) in 2024, under which the 'New Delhi Declaration' was adopted[19].

The Indian judiciary has also stepped up to prevent the misuse of deepfakes. Recently, the Hon'ble Delhi High Court granted protection to an actor's persona and personal attributes against misuse, specifically through AI tools used to create deepfakes[20].

Conclusion

Deepfakes are a global issue. Effectively regulating their use and preventing privacy violations will likely require international cooperation and collaboration. 

Poor-quality deepfakes are easier to spot: an unnatural face, environment or lighting, unnatural behaviour, and image artefacts such as blurriness or flickering around the edges of transposed faces can give a deepfake away. As deepfakes improve, however, they become harder to detect with the naked eye. Yet technology can counter technology: blockchain can aid in combating deepfakes, and several deepfake detection tools[21] are available to the public.
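As a rough illustration of the provenance idea behind blockchain-assisted verification, the minimal Python sketch below fingerprints a media file with a cryptographic hash; anchoring such a fingerprint in a tamper-evident record (for example, a blockchain transaction) would later let anyone check whether a circulating copy still matches what was originally published. The file names and the registration step are hypothetical and shown only for illustration; this is not a description of any specific detection tool.

import hashlib
from pathlib import Path


def media_fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a media file, read in chunks."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_registered_original(path: str, registered_digest: str) -> bool:
    """Compare a circulating copy against the fingerprint recorded at publication.

    In a real deployment the registered digest would be retrieved from a
    tamper-evident ledger; here it is simply passed in as a string.
    """
    return media_fingerprint(path) == registered_digest


if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    original_digest = media_fingerprint("press_statement_original.mp4")
    print("Registered fingerprint:", original_digest)
    print("Copy authentic?", matches_registered_original("copy_found_online.mp4", original_digest))

Any edit to the file, including a deepfaked substitution, changes the digest, so the check fails for manipulated copies; what the hash cannot do is prove that the originally registered file was itself genuine.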
 

[1] https://www.khuranaandkhurana.com/2024/04/19/the-use-of-deep-fake/#_ftnref2

[2] Impact of Deepfake Technology on Social Media: Detection, Misinformation and Societal Implications, The Eurasia Proceedings of Science, Technology, Engineering & Mathematics (EPSTEM), 2023 Volume 23, Pages 429-441, http://www.epstem.net/tr/download/article-file/3456697

[3] Ibid.

[4] Ibid.

[5] https://economictimes.indiatimes.com/tech/technology/75-indians-have-viewed-some-deepfake-content-in-last-12-months-says-mcafee-survey/articleshow/109599811.cms?from=mdr

[6] https://www.livelaw.in/law-firms/law-firm-articles-/deepfakes-personal-data-artificial-intelligence-machine-learning-ministry-of-electronics-and-information-technology-information-technology-act-242916#:~:text=2023%20LiveLaw%20(Del)%20857%20%E2%86%91%202022%20SCC%20OnLine%20Del%204110.%20%E2%86%91

[7] Ibid.

[8] https://www.scconline.com/blog/post/2023/03/17/emerging-technologies-and-law-legal-status-of-tackling-crimes-relating-to-deepfakes-in-india/

[9] https://indiaai.gov.in/news/center-to-introduce-new-regulations-to-tackle-issues-of-deep-fakes

[10] The advisory to social media intermediaries to identify misinformation and deepfakes was released by the Union Government on 7 November 2023.

[11] https://pib.gov.in/PressReleseDetailm.aspx?PRID=1990542&ref=static.internetfreedom.in.

[12] Particularly those specified under Rule 3(1)(b) of the IT Rules, 2021

[13] https://pib.gov.in/PressReleseDetailm.aspx?PRID=1990542&ref=static.internetfreedom.in.

[14] https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1975445

[15] Refer Rule 7 of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021; Sections 66E, 67, 67A, 67B and 79(1) of the IT Act; and Sections 79, 192, 336 and 356 of the Bharatiya Nyaya Sanhita, 2023.

[16] Refer Section 51 of The Copyright Act, 1957.

[17] https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023

[18] https://www.livelaw.in/law-firms/law-firm-articles-/deepfakes-personal-data-artificial-intelligence-machine-learning-ministry-of-electronics-and-information-technology-information-technology-act-242916#:~:text=2023%20LiveLaw%20(Del)%20857%20%E2%86%91%202022%20SCC%20OnLine%20Del%204110.%20%E2%86%91

[19] https://gpai.ai/ 

[20] Refer 2023 LiveLaw (Del) 857 and 2022 SCC OnLine Del 4110 

[21] https://builtin.com/artificial-intelligence/ai-detection-tool
