9 September 2024

How AI Is Impacting Legal Issues And Priorities In The Fashion Industry

Pryor Cashman LLP


The fashion industry, long known for its emphasis on creativity and innovation, is undergoing a transformative shift due to advancements in artificial intelligence (AI). This technology is not only enhancing design processes and consumer experiences, but also reshaping the legal landscape, particularly as it relates to intellectual property protection and enforcement, design development and protection, the ramifications of deep fakes on influencer and celebrity endorsements and collaborations, and how to prepare for all of the above in standard industry contracts going forward.

This article delves into these key areas, examining how AI is impacting the fashion industry and how legal teams can prepare for these shifts.

IP Rights in AI-Generated Works

AI's role in generating new fashion designs is growing. Algorithms can analyze vast datasets of fashion trends, historical designs, and consumer preferences to create innovative patterns, styles, and collections. However, this raises complex questions about IP rights. Traditionally, in the U.S., designers and/or brand-owners can obtain various forms of IP rights in their designs, logos, graphics, photographs, and brand names (among other things), including trademark, trade dress, patent, and copyright protections. In the context of AI-generated creations, whether IP protection is available differs depending on discipline.

In particular, the Copyright Act protects only "original works of authorship," which has been interpreted to mean works created by humans. For patents, an AI system may not be listed as an inventor, but a human's use of an AI system will not preclude that person from qualifying as an inventor if they "significantly contributed" to the claimed invention. In contrast, AI-created trademarks may be protected because trademark rights arise from use in commerce, not from the creation or invention of the mark.

To protect their AI-generated designs, human designers should be involved in the final stages of creation, ensuring that the human element needed to claim authorship or inventorship is present. Companies must continue to include work-for-hire clauses in employee handbooks and consulting agreements to ensure that all rights vest directly in the company from the date of creation. Because of the open questions around the protectability of AI-generated works, individual creators should expressly represent and warrant either that they will not use AI in creating any of the deliverables, or that they will do so only at the direction and with the guidance of the company, so that the company can dictate the level of human involvement needed for the work to (potentially) still qualify as a work of authorship under then-applicable law.

AI in the Context of Trade Dress Development and Protection

Trade dress refers to the visual elements of a product, a product design, or its packaging. To be protectable, those elements must be nonfunctional and, in the case of product design, must have achieved consumer recognition as an indicator of the product's source (packaging, by contrast, can be inherently distinctive). Brands are likely to use AI to develop distinctive trade dress elements by analyzing consumer preferences, market trends, and competitive products. This data-driven approach allows brands to create a unique and appealing trade dress that resonates with consumers. When seeking trade dress protection, AI tools can help generate evidence that the design has achieved secondary meaning.

With all the above-referenced beneficial aspects of AI, the use of AI in trade dress development also raises legal and ethical questions. For example, if AI generates trade dress elements based on existing designs, it may inadvertently lead to similarities that could result in trade dress infringement claims. Ensuring that AI-generated trade dress is genuinely distinctive and non-infringing requires careful oversight and collaboration between designers and legal experts.

On the other side of the coin, many fast-fashion brands are purposefully using AI to scour social media to identify fashion trends and then quickly manufacture copycat products that are price-point accessible for young consumers, including clothing, handbags, and accessories.

Companies should invest in robust IP management systems that leverage AI to monitor and enforce IP rights globally. These systems can track unauthorized use of designs, identify potential infringements, and streamline the enforcement process, thereby safeguarding the creative assets of fashion brands.
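As a simplified illustration of how such monitoring systems can flag potential copies, the sketch below compares a toy perceptual hash of a registered design against a scraped candidate. This is an assumption-laden illustration, not any particular vendor's system: the pixel grids, the naive average-hash, and the 0.9 threshold are all hypothetical, and production tools use far more robust image fingerprints.

```python
def average_hash(pixels):
    """Toy average-hash over a grayscale pixel grid (values 0-255).

    Each bit records whether a pixel is brighter than the image's mean,
    so visually similar designs produce mostly matching bits.
    """
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return [1 if p > avg else 0 for p in flat]

def similarity(a, b):
    """Fraction of matching hash bits between two designs."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

# Hypothetical data: the brand's registered design vs. a scraped listing.
registered = average_hash([[200, 200], [10, 10]])
candidate = average_hash([[190, 210], [5, 15]])

if similarity(registered, candidate) > 0.9:
    print("flag listing for legal review")  # prints: flag listing for legal review
```

In practice the flagged matches would feed a human review queue, since hash similarity alone cannot establish infringement.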

The Rise of Deep Fakes and Their Impact on Talent Endorsements and Brand Collaborations

Given that talent endorsements and celebrity collaborations are valuable forms of marketing in fashion, the talent's reputation is of paramount concern to the fashion brands contracting with them. A rising concern is the potential for rapid dissemination of believable misinformation, or of truthful but damaging past information, about a brand, which can cause enormous reputational harm. Even before the rising threat of AI, reputational due diligence and contractual protections were critical when entering into talent agreements, with the brand needing to research and vet the talent's background and reputation. This process continues throughout the duration of the relationship to guard against reputational harm, and it requires vigilance and close communication with the talent's team to navigate fraught situations and potential misinformation.

Talent agreements typically include a "Morals Clause" or "Conduct Clause." These clauses provide an express remedy against talent whose conduct, statements, and/or associations adversely affect or are likely to adversely affect the public's perception of that talent, and therefore any project, brand, or company with whom the talent is associated.

The growing sophistication and ubiquity of AI technology complicates the traditional process of reputation management, particularly with the recent advent of deep fakes. AI-created deep fakes manipulate images and videos to create realistic but fabricated content, often featuring a near-identical replica of a specific person or product. In the fashion industry, deep fakes have significant implications, particularly in marketing, endorsements, and consumer trust.

While deep fakes can be used creatively and lawfully, such as for virtual fashion shows or personalized marketing done with the talent's consent, unauthorized deep fakes pose risks to authenticity and brand integrity. Because the bad actor is not actually the talent, contracting around the relevant behavior (as is traditionally done through a Morals Clause) has become more complicated. Put simply, deep fakes of talent raise the question of what constitutes "conduct" sufficient to trigger the Morals Clause.

Fashion companies should think more expansively when drafting a Morals Clause about which events should be anticipated. Because reputational harm is the cornerstone of the Morals Clause, and AI can create such harm without any actual conduct by the talent, fashion brands are likely to push for wider-reaching Morals Clauses that cover public accusations and perceptions of misconduct rather than mere misconduct. Brands should also expand the timeframe during which the accusations or alleged conduct took place, because AI deep fakes can be made to appear to depict events long past, with the fabricated "conduct" allegedly "discovered" only today. Similarly, the power of AI as a search tool may change how easily one can uncover truthful past misconduct otherwise buried on the internet, which could help companies better vet talent. With that said, fashion companies should also be prepared for talent to push back on these requests and to seek an express carve-out from liability for deep fakes.

Regardless of how liability for AI-generated "misconduct" is shared, talent agreements should include forward-looking remedial measures to mitigate the damage caused by the relevant misinformation. Companies should expressly require talent to take reasonable remedial measures to correct misinformation via social media channels and through other means. Such measures might include an obligation for talent to actively engage in public discussions about the misinformation and to cooperate with the brand in remedial campaigns. The question becomes: at whose cost should such services be performed? Unlike a typical breach of contract, which places the burden and cost of "curing" misconduct on the party who engaged in that behavior, the harm caused by a deep fake was not caused by either party; fairness may therefore call for the burden to be shared between company and talent.

Deep fakes can also be used to counterfeit digital products or images, making it difficult for consumers to distinguish between genuine and fake items as shown online in a digital format. For fake product offerings, brands should invest in technologies that can detect and prevent deep fakes, such as digital watermarking and blockchain-based authentication systems. The fashion industry should create industry-wide standards to combat the spread of deep fakes and protect consumer trust.
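As a simplified sketch of the authentication idea mentioned above (not any specific vendor's watermarking or blockchain product), a brand could register a cryptographic fingerprint of each official asset at publication time and later check whether a circulating image matches a registered entry. The asset bytes and the in-memory "ledger" here are hypothetical stand-ins; a blockchain-based system would anchor the registered fingerprints on-chain so they cannot be silently altered.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 content fingerprint for a product image or asset."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical brand-side ledger of fingerprints registered at publication.
ledger = set()

original_asset = b"official-campaign-image-bytes"
ledger.add(fingerprint(original_asset))

def is_authentic(data: bytes) -> bool:
    """An asset checks out only if its exact fingerprint was registered."""
    return fingerprint(data) in ledger

print(is_authentic(original_asset))               # True
print(is_authentic(b"tampered-deep-fake-bytes"))  # False
```

Note that an exact-hash check only proves an asset is byte-identical to a registered original; detecting manipulated near-copies requires the perceptual watermarking and detection technologies the article refers to.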

Takeaways

AI has changed the way we operate and create. Brands need to understand how employees are relying on AI to develop fashion designs and innovations, and then carefully navigate which forms of IP protection are available. Influencers and celebrities need to be keenly aware of how their name, image, and likeness are used on the web and take swift measures to protect their reputations. We are working with clients, on both the brand side and the talent side, to make sure agreements include provisions that anticipate AI-generated issues, and we are guiding them through measures to ensure that their creations are protected and their rights enforced globally.

Originally published by ALM LAW.COM New York Law Journal.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
