Below is this week's tracker of the latest legal and regulatory developments in the United States and in the EU. Sign up here to ensure you do not miss an update.

AI Intellectual Property Update:

  • The Copyright Office plans to publish three reports this year on AI and copyright that "are set to be hugely consequential," responding to the hundreds of public comments the office received.
  • The FTC launched an inquiry into "investments and partnerships involving generative AI," sending information requests to Alphabet, Amazon, Anthropic, Microsoft, and OpenAI. FTC Chair Lina M. Khan said: "Our study will shed light on whether investments and partnerships pursued by dominant companies risk distorting innovation and undermining fair competition."
  • The FTC and the DOJ are debating which agency has the authority to investigate AI companies for allegedly illegally scraping content from websites to train AI models.
  • Entrepreneurs and activists are releasing tools designed to let artists modify their artwork so that it cannot be used to train GenAI models. Nightshade, a tool released this week, makes subtle changes to an image's pixels to trick models into thinking the image depicts something other than what it actually does. Kin.art, another new tool, uses image segmentation (i.e., concealing parts of the artwork) and tag randomization (swapping an art piece's metadata tags) to interfere with the model-training process.
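The two interference techniques described above can be sketched in a few lines of Python. This is a toy illustration only, not the actual Nightshade or Kin.art method: the function names, the bounded random-noise perturbation, and the tag-shuffling step are all illustrative assumptions (real tools compute carefully optimized, targeted perturbations rather than random noise).

```python
import random

def perturb_pixels(image, epsilon=2, seed=0):
    """Toy stand-in for pixel perturbation: nudge each pixel value by a small,
    bounded random offset, keeping values in the valid 0-255 range. Real tools
    like Nightshade compute targeted perturbations designed to mislead model
    training; this random version only conveys the general idea."""
    rng = random.Random(seed)
    return [
        [max(0, min(255, px + rng.randint(-epsilon, epsilon))) for px in row]
        for row in image
    ]

def randomize_tags(tags, seed=0):
    """Toy stand-in for tag randomization: shuffle the metadata tags so they
    no longer line up with the artwork's actual content."""
    rng = random.Random(seed)
    shuffled = list(tags)
    rng.shuffle(shuffled)
    return shuffled

# A tiny 2x2 grayscale "image" and a list of metadata tags.
image = [[120, 121], [122, 123]]
tags = ["portrait", "oil-painting", "dog"]

noisy = perturb_pixels(image)      # visually near-identical to the original
scrambled = randomize_tags(tags)   # same tags, mismatched ordering
```

The point of both techniques is asymmetry: the changes are imperceptible (or irrelevant) to a human viewer of the artwork, but corrupt the image-to-label correspondence a training pipeline relies on.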

AI Litigation Update:

  • The Copyright Office's decision to deny copyright to an AI-generated work is being appealed to the D.C. Circuit. Plaintiff Stephen Thaler's appeal argues that: "Nothing in the Copyright Act requires human creation. Instead, it explicitly allows for non-human authors." The appeal further argues that "non-human authorship has been a fixture of American copyright law for more than a century and there is no requirement to identify any creative contribution by a natural person. There is also no case that stands for the proposition that the Act contains an implicit Human Authorship Requirement. Once more to the contrary, the Supreme Court has repeatedly held that the Act is intended to be interpreted expansively and dynamically to capture the benefits of technological progress." The case is Thaler v. Perlmutter, No. 23-5233 (D.C. Cir.).
  • Anthropic asked a Tennessee federal court to reject an early bid by Universal Music, ABKCO, and Concord Music Group to stop it from using and reproducing their song lyrics through its large language model, Claude. Anthropic told the court that the three music publishers could not show they were being irreparably harmed, and also argued that the publishers had brought their lawsuit against the company in the wrong court.
  • Google settled Massachusetts startup Singular Computing LLC's $1.6 billion patent infringement case just before closing arguments were set to begin. Singular claimed Google stole its technology to boost its machine learning products.

AI Policy Update—European Union:

  • The European Commission announced the establishment of an EU AI Office, which will work to ensure the development and coordination of AI policy at the EU level. The office will also supervise the implementation and enforcement of the forthcoming EU AI Act and will be responsible for investigating possible violations of the Act's rules for general-purpose AI models. At the international level, the EU AI Office will contribute to cooperation on AI, including promoting regulatory guardrails and AI governance. The office will become active on February 21, 2024.
  • The European Commission, together with some EU Member States, established two European Digital Infrastructure Consortiums (EDICs), including to address the shortage of European language data for training of AI models and support the development of European LLMs.
  • The European Commission launched an AI innovation package of measures to support European startups in the development of trustworthy AI in line with the EU values and rules:
    • An amendment to the EuroHPC Regulation to set up "AI Factories," including changes to enable the development of AI applications based on general-purpose AI models.
    • An EU AI Start Up and Innovation Communication outlining additional key activities, including the European Commission's financial support dedicated to generative AI, initiatives aimed at strengthening the EU's generative AI talent pool, and measures encouraging investment in AI startups.
  • The European Commission adopted a Communication which outlines its strategic vision to foster the internal development and use of lawful, safe and trustworthy AI systems. With this Communication, the European Commission aims to anticipate and prepare internally for the implementation of the forthcoming EU AI Act.
  • The European Union Agency for Fundamental Rights (FRA) launched a new research project with the goal of providing guidance on how to assess the impact of so-called high-risk AI systems, as defined in the forthcoming EU AI Act, on fundamental rights.
  • The Dutch Data Protection Authority published its second AI & Algorithmic Risks Report, which shares a national master plan aiming to prepare the Netherlands for a future with AI. The master plan aims to achieve effective management and control of the use of AI and algorithms by 2030, involving collaboration among companies, government, academia, and NGOs. The strategy also includes annual goals and agreements and the implementation of regulations such as the EU AI Act.
  • The Department of Public Expenditure, NDP Delivery & Reform of the Government of Ireland published Interim Guidelines for Use of AI in the public service. The guidelines set out the Irish Government's commitment to ethical AI, key considerations before adopting an AI tool, applicable safeguards, and the support and resources available to public organizations.

AI Policy Update—International:

  • A court in Beijing, China granted copyright to an image generated by artificial intelligence after the plaintiff, who created the image using Stable Diffusion, took legal action against a blogger for unauthorized usage.
  • The Organization for Economic Co-operation and Development (OECD) published a working paper which discusses the issue of collective action for responsible AI in health. The paper provides an overview of the background and current state of AI in health, and perspectives on certain opportunities, risks, and barriers to success.
  • The WFEO-CEIT released its "Safety & Global Governance of Generative AI" report, which gathers 29 commentaries from over 40 experts across continents working in policymaking, entrepreneurship, scholarship, and engineering, offering insights and recommendations on the safety and global governance of generative AI.
  • The World Health Organization (WHO) released ethics and governance guidance on AI for large multi-modal models. The guidance includes more than 40 recommendations intended to ensure the appropriate use of such models.
  • The UK Government published a framework with ten core principles for the safe, secure and effective use of Generative AI in government and public sector organizations.
  • The Australian Government shared its interim response to the safe and responsible AI consultation it held in 2023. The interim response summarizes the contributions received from the Australian public, academia, and businesses about safe and responsible AI, and details the actions the government is taking in response.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.