Artificial intelligence continues to find application in a rapidly growing number of sectors, and the adult entertainment industry is no exception. While deepfakes continue to plague communities, legitimate AI-generated pornography companies face a unique set of legal issues they must address to operate within the law. As technology makes AI-created images and videos ever more realistic, society must confront the ethical and legal implications of these advancements, ensuring that individuals' rights are protected and that appropriate measures are taken against those who misuse these tools for malicious purposes. The legal issues facing AI pornography companies include consent, privacy, intellectual property, liability, and regulatory compliance. Navigating these complexities requires careful planning, robust policies, and adherence to ethical guidelines; proactive legal risk management is thus critical in this evolving field.

US Policy

The US has long been ahead of the curve in taking a stance against the production, dissemination, and viewing of even fake child sexual abuse material if the obscene images depict someone who is "virtually indistinguishable" from a real child. The law makes it clear that "it is not required ...that the minor depicted actually exist(s)." Federal law enforcement has also taken an aggressive approach to AI-generated pornography and "revenge porn." At the time of writing, it was widely anticipated that the Senate would take up a bill that would make sharing non-consensual AI-generated pornography illegal and create additional legal recourse for victims.

However, purveyors of otherwise legitimate AI pornography – that which does not infringe on privacy and represents willingly participating adult performers – must still take care to comply with laws designed to protect viewers, performers, and copyright holders so they can avoid potentially costly civil liability and criminal charges. Due diligence and effective legal counsel are all the more critical now that advanced generative technologies are at the disposal of even novice users. Until just a few years ago, creating AI-generated content required a certain level of technical acumen; now it is a matter of downloading an app, writing a prompt, and clicking a button. Experts say this has given rise to an entire industry that thrives on creating and sharing digitally created sexually explicit media, including websites with hundreds of thousands of paying members.

Determining whether AI porn is legal is a tricky subject with multiple factors in play. However, even if legislatures and courts allow the creation of AI-generated pornographic content, its distribution is another matter.

Obscenity – The Supreme Court in Miller v. California (1973) defined obscenity as pornography that, taken as a whole, appeals to prurient interests; depicts sexual conduct in a patently offensive way; and lacks serious literary, artistic, political, or scientific value. Material deemed obscene is not constitutionally protected as free expression and can be subject to state bans. Simple nudity does not fall into this category and is thus protected across all states. State and local jurisdictions may, however, criminalize images and media that show more than mere nudity but fall short of the definition of obscenity. Newport News, VA, for instance, makes it a crime to produce and distribute material that appeals to a "shameful or morbid interest in nudity, sexual conduct, (or) sexual excitement..."

The question then arises as to the relevance of these state laws in the contemporary digital landscape. The internet has undeniably blurred geographical boundaries, making it increasingly complicated to enforce state-specific obscenity laws on online content.

Access by Minors – As the digital landscape continues to evolve, the AI adult content industry faces unique challenges in maintaining compliance with various state laws regarding age verification. These laws are particularly relevant for platforms that utilize AI to generate adult content.

Typically, websites offering adult content include an age-verification notice, requiring users to confirm they are 18 years or older before accessing explicit material. However, certain states have more stringent requirements. Specifically, Utah, Arkansas, Virginia, Mississippi, and Louisiana require operators to verify the age of users before allowing access if the site houses a specified amount of pornographic content. This verification process involves the user providing a government ID card or digital ID card. We anticipate that other states may soon adopt similar legislation.

Notably, many of these laws encompass AI-generated material. Both Utah and Arkansas include "descriptions of actual, simulated, or animated displays or depictions" of nudity or sexual acts within their definitions of regulated material. Producers and distributors of non-obscene pornography could also face prosecution, depending on the jurisdiction, if the material is displayed in areas where minors might be present (Utah) or is simply considered "harmful to minors" (Indiana).

Given these legal complexities, it is advisable to implement geofencing strategies for the states mandating age verification. Pornography companies may need to either refrain from offering their platform in these states or invest in the legally mandated identity verification processes.
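
To make the geofencing approach concrete, the following is a minimal sketch, assuming a hypothetical geolocate_state() lookup (in practice, a commercial IP-geolocation service) and an illustrative list of states; it is not a statement of which laws apply to any particular site.

```python
# Minimal sketch of state-based geofencing for age-verification laws.
# The state list and geolocate_state() are illustrative assumptions.
from typing import Optional

AGE_VERIFICATION_STATES = {"UT", "AR", "VA", "MS", "LA"}

def geolocate_state(ip_address: str) -> Optional[str]:
    """Hypothetical lookup; a real deployment would call an IP-geolocation
    service. Here it simply reports the location as unknown."""
    return None

def route_request(ip_address: str, id_verified: bool) -> str:
    """Decide how to serve a visitor based on their apparent state."""
    state = geolocate_state(ip_address)
    if state is None:
        # Unknown location: a conservative policy treats it as restricted.
        return "require_id_verification"
    if state in AGE_VERIFICATION_STATES:
        # Either block these states outright or run the mandated ID check.
        return "serve_content" if id_verified else "require_id_verification"
    return "serve_content"

if __name__ == "__main__":
    print(route_request("203.0.113.7", id_verified=False))
```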

Deepfakes – Nearly all states have laws that prohibit the non-consensual use of photos, videos, or images of real-life persons in adult content. This includes "deepfakes," or modification of a person's image so that it falsely appears the person is performing an act or in a particular setting, as well as "revenge porn." A number of states have passed or are considering laws that apply specifically to deepfakes.

The current federal law, commonly known as Section 230, generally shields websites from liability for user-posted content unless it concerns child sex trafficking and the company has "actual knowledge of" and "assists, supports, or facilitates" the trafficking venture. However, there is no existing guidance on content generated via a website's own tools, such as AI, where the site also displays that content. That is, Section 230 protection does not apply to content that the platform creates or plays a significant role in creating. Platforms that generate pornographic content at the direction of users, or based on images users upload, and then display that content to the user and others risk being complicit in the creation of deepfakes and may expose themselves to significant liability.

Platforms that allow users to upload images as inputs for pornographic AI output should ensure the third-party images are combined with a sufficiently large database so that the resulting images are not identifiable as particular real-life persons. The platform's terms of service should explicitly prohibit deepfakes and the use of real people's images without their consent, and the platform should implement procedures to detect deepfakes and promptly remove content upon receiving a credible complaint of non-consensual use.
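
As an illustration only, the sketch below shows one way such a complaint-driven removal procedure might be structured; the Complaint and TakedownQueue types and the suspend-first policy are assumptions made for the example, not a prescribed compliance workflow.

```python
# Minimal sketch of a complaint-driven takedown workflow for suspected
# non-consensual deepfakes. The in-memory stores are illustrative only;
# a real platform would persist complaints and route them to human review.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Complaint:
    content_id: str
    reporter: str
    reason: str
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class TakedownQueue:
    def __init__(self) -> None:
        self.live_content: dict[str, dict] = {}     # content_id -> metadata
        self.audit_log: list[tuple[str, str]] = []  # (content_id, action)

    def publish(self, content_id: str, metadata: dict) -> None:
        self.live_content[content_id] = metadata

    def handle_complaint(self, complaint: Complaint) -> None:
        """Suspend the content immediately on a credible complaint and log
        the action, rather than leaving it live while review is pending."""
        if complaint.content_id in self.live_content:
            del self.live_content[complaint.content_id]
            self.audit_log.append((complaint.content_id, "removed_pending_review"))
```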

Age Verification and Recordkeeping – Federal law, specifically "Section 2257," stipulates stringent requirements for producers of pornographic content involving real-life performers.

Under Section 2257, producers must collect and maintain records affirming that all performers were at least 18 years old at the time of filming. This extends to the production of "computer-manipulated images" of real-life individuals. Additionally, producers are required to display notices on their content indicating where these records are kept.
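
Purely to illustrate the recordkeeping burden, the sketch below shows what a single record entry might look like; the field names, the age check, and the custodian-notice wording are assumptions made for the example and should be confirmed with counsel rather than treated as a compliant schema.

```python
# Illustrative sketch of a Section 2257-style performer record.
# Field names and the notice text are assumptions, not legal advice.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class PerformerRecord:
    legal_name: str
    date_of_birth: date
    id_document_ref: str            # pointer to the copied government ID
    production_date: date
    content_ids: tuple[str, ...]    # works in which the performer appears

    def was_adult_at_production(self) -> bool:
        """Rough check that the performer was at least 18 on the production date."""
        dob, prod = self.date_of_birth, self.production_date
        age = prod.year - dob.year - ((prod.month, prod.day) < (dob.month, dob.day))
        return age >= 18

# Example of the kind of notice displayed with the content, indicating
# where the required records are kept (wording illustrative only).
CUSTODIAN_NOTICE = "Records required by Section 2257 are kept by the Custodian of Records at [address]."
```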

It seems reasonable to assume that any platform, or at least its users, would fall under this law's purview, provided the input data incorporates images of actual people. However, the law does exempt "digitization of existing images," but only in the context of photo processing where there's no commercial interest in the content.

The application of Section 2257 isn't entirely clear, even for traditional pornography. A notable case in 1998, Sundance Associates v. Reno, saw the 10th Circuit Court of Appeals rule that the law did not apply to entities not involved in "hiring, contracting for, managing, or arranging for the participation of the depicted performers." If this interpretation holds, platforms would be exempt from the law, but this ruling currently applies only in states under the 10th Circuit's jurisdiction.

Given these complexities, it may not be feasible for businesses to verify the ages of performers in the default input library. For user-uploaded images, the terms of service should mandate compliance with Section 2257. However, unless users upload only images they've created themselves, practical implementation may prove challenging.

Regrettably, complete avoidance of risk under Section 2257 seems unattainable. As we continue navigating this intricate legal landscape, staying informed and adhering to best practices will be crucial for mitigating potential risks and ensuring regulatory compliance.

Child Pornography – The production and dissemination of child pornography, defined as sexually explicit material depicting persons under 18, are illegal under both federal and state laws. These laws could implicate a company either through the use of child pornography in its input data or if its output is deemed to constitute child pornography.

  • Input Data – In instances where user-posted images feature child pornography, Section 230 protects platforms from liability, as established in Does v. Reddit, Inc. (2022). This protection should extend to images uploaded by users as input data. Nonetheless, it is recommended that platforms implement policies to swiftly remove such content upon receiving credible information that it constitutes child pornography.

    As noted, the applicability of Section 230's liability shield remains uncertain when the platform's AI technology is involved in producing the content, as opposed to more common scenarios where a user posts child pornography on a website. Some AI porn business models use the technology to modify or amalgamate user-supplied content to generate new images, albeit without directly filming the subjects.

    As discussed above, companies must employ all reasonable measures to ensure their default image libraries do not contain child pornography. Section 230 will not protect the company from liability in cases where such images exist.

  • Output Data – Federal law categorizes child pornography as including "computer-generated images." However, Ashcroft v. Free Speech Coalition (2002) determined that "virtual child pornography," referring to images and videos appearing to depict minors but not produced using actual minors, was protected by the First Amendment and thus could not be banned.

    Businesses should screen for and promptly remove images seemingly portraying minors in sexual contexts. Though US producers likely would not be held liable for possessing AI-generated content of non-existent children and US platforms probably would be immune under Section 230, recent news reports highlight a case where a Canadian man received a prison sentence for creating AI-generated deepfake child pornography.
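
By way of illustration, a minimal pre-publication screen might look like the sketch below; apparent_minor_score() stands in for whatever vetted age-estimation or moderation classifier a platform actually licenses, and the threshold value is an arbitrary placeholder.

```python
# Minimal sketch of a pre-publication screen for AI output that may
# appear to depict a minor. apparent_minor_score() is a placeholder
# for a licensed age-estimation or content-moderation classifier.
def apparent_minor_score(image_bytes: bytes) -> float:
    """Hypothetical classifier returning a 0.0-1.0 likelihood that the
    image depicts a minor; a real deployment would call a vetted model."""
    return 0.0

def review_output(image_bytes: bytes, block_threshold: float = 0.2) -> str:
    """Block and escalate anything above a deliberately low threshold;
    false positives are far cheaper here than false negatives."""
    if apparent_minor_score(image_bytes) >= block_threshold:
        return "blocked_for_human_review"
    return "approved"
```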

Copyright – The question of whether copyright infringement occurs when copyrighted content is input into an AI algorithm for content generation is currently unresolved. Some legal experts argue that if the input content cannot be recognized in the output, its use might fall under the "fair use" doctrine. However, this contention is being challenged in court by rights-holders. This issue is expected to attract considerable legislative and regulatory attention in the coming years.

In terms of input material used by a platform, it is advisable to use licensed content or public-domain material to avoid potential copyright infringement lawsuits, particularly given the litigious nature of pornography publishers in the United States. Using copyrighted material sourced from the internet without permission could significantly expose companies to legal risks.

For input material uploaded by users, companies should be safeguarded by the US Digital Millennium Copyright Act (DMCA). The DMCA's notice-and-takedown provision shields platforms from liability provided they promptly remove copyrighted content upon receiving a notice. However, how the DMCA applies to AI-generated content remains unclear.

Turning to the output side, under current US law, AI-generated output is not protectable by copyright. Only human-created modifications to AI images are protected. This implies that the AI output on a platform would fall into the public domain, allowing anyone to use it without requiring a license or paying royalties.

However, using copyrighted images or videos without consent likely constitutes copyright infringement. Therefore, AI porn producers should secure a copyright license when using another's intellectual property, as the legal consequences of infringement could be severe.

Determining copyright ownership becomes complex when AI generates images. AI systems draw from a vast database of training data, much of it copyrighted. If an AI-generated image draws inspiration from 300 different pictures, would it be necessary to obtain permission from each photographer and model? Moreover, it may be impossible to ascertain which training images the AI used to generate its "original" image.

Conclusion

For now, Section 230 of the Communications Decency Act shields websites and internet providers from much legal liability for false or defamatory information posted by users:

"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

Still, individuals and corporations must be aware of the legal implications surrounding their actions when using AI-generated pornography. Numerous applications and websites currently promote the creation of personalized adult content. It is always best to consult a qualified legal professional prior to participating in AI-generated pornography in any capacity, whether as a performer, producer, consumer, content-sharing platform, or intermediary.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.