Actor Jackie Shroff recently obtained a Delhi High Court order safeguarding his personality rights against misuse by Artificial Intelligence (AI) chatbots, social media platforms, and other entities. The rising concern over 'deepfake' content has already prompted various regulatory bodies to promulgate new laws and compliance measures to curb the misuse of generative AI. Voice deepfakes, a subset of generative AI, have equipped scammers with new ways to exploit individual rights and manipulate consumers.
An OpenAI product sparked controversy when its AI assistant's voice resembled that of Scarlett Johansson, despite the actress not having lent her voice to train the AI program. This raised major concerns about unauthorized data collection and its misuse. Interestingly, such AI voices not only replicate the subject's voice and articulation but can also generate new content in different languages, even a language in which the subject is not conversant.
Legal Challenges concerning AI-generated voices
Copyright Infringement
Voice deepfake content can infringe copyright, a particular concern for singers and voice-over artists, who may face challenges in legally enforcing their rights given that copyright is granted over a song as a whole and ownership of a song vests with its producers.
In the case of an AI-generated novel, the US Copyright Office granted ownership only over the portions produced by human authorship and denied copyright over the content produced by AI.
Personality Rights
Although the jurisprudence is developing positively, the concept of personality rights lacks specific legislation. Moreover, while celebrities can seek protection under personality rights, non-public figures may face challenges, since obtaining pre-emptive protection against voice cloning is difficult if the voice is not a well-identified one.
The problem becomes exceedingly challenging when an AI tool uses a deceased artist's voice to create content. In one instance concerning a deceased person's personality rights, the Delhi High Court opined that personality rights are not heritable and expire with the death of the individual.1 Therefore, even if a platform's intention is to produce creative content, the vesting of the right (under succession laws) may be questioned.
Privacy/Reputation Concerns
Apart from commercial and creative damage, voice deepfakes raise privacy and reputational concerns as well.
Recently, a Berkeley-based AI start-up allegedly stole a voice actor's voice to train its AI software under the pretence of using the voice for research purposes.2 It was later discovered that the voice had been cloned and used for commercial purposes.
Fraudulent Practices
There has been a rise in cybercrime incidents in India, such as fraud and extortion, in which scammers place phone calls to individuals using voice clones of the victims' known contacts and dupe them into transferring money.3 Moreover, the complexity of AI platforms hinders cybersecurity experts from tracing deepfake perpetrators.
Current Framework: India and Worldwide
Owing to the increase in such incidents, the Ministry of Electronics & IT (MeitY) has issued advisories directing intermediaries to exercise due diligence in identifying and taking down deepfake content.4 However, India is yet to formulate specific legislation dealing with AI.
There is a pressing need to expand the scope of publicity rights so that the voice is protected from unauthorized use by AI tools. Notably, under Tennessee law, a person can be held liable for making unauthorized content publicly available, and an entity can be held liable for developing software that produces unauthorized content containing an individual's voice.5
The Federal Trade Commission (FTC) has sued various service providers that sell such technologies without taking adequate measures to prevent consumers from being deceived.6
Conclusion: Way Forward
The social media boom fuelled the availability of personal data on the internet, aiding the creation of increasingly accurate deepfake content. Given the unrestrained and unregulated access to the technology and the companies producing it, personality rights are at risk not only for prominent figures but for anyone with an online presence.
While legislation imposing legal ramifications on both the entities and the creators of such content would be one way to address the problem, it may not be adequate on its own, since obtaining takedown orders acts only as a reactionary measure, often too late to stop content from going viral. Therefore, preventive measures, such as robust SOPs, policies, and directions for intermediaries, would be crucial alongside specific legislation.
Footnotes
1. Krishna Kishore Singh v. Sarla A Saraogi, 2023 SCC OnLine Del 3997
2. https://edition.cnn.com/2024/05/17/tech/voice-actors-ai-lawsuit-lovo/index.html
4. https://pib.gov.in/PressReleaseIframePage.aspx?PRID=1975445
5. https://www.cdomagazine.tech/aiml/tennessee-introduces-new-law-to-prevent-ai-voice-cloning
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.