ARTICLE
27 August 2024

AI Law Is Emerging: Are You Prepared For It?

McLane Middleton, Professional Association

Contributor

Founded in 1919, McLane Middleton, Professional Association has been committed to serving its clients, community and colleagues for over 100 years. It is one of New England's premier full-service law firms with offices in Woburn and Boston, Massachusetts and Manchester, Concord and Portsmouth, New Hampshire.

Artificial intelligence is not just useful for composing an essay, legal brief, or article for the Bar News. Businesspeople who believe that AI is far from widespread application or inapplicable to their operations underestimate the technology's versatility and speed of implementation. AI already animates, or soon will be integral to, most systems that we use at work every day, such as email, video conferencing, notetaking, document creation, data summarization and analysis, file management, accounting, marketing, and personnel management.

Leveraging the utility and profitability of AI requires developing an aptitude for it; however, that is a topic for another article. This article addresses emerging AI law, because implementing AI in a manner that is sustainable into the future necessitates compliance with that law.

The European Union adopted the first comprehensive AI law in March 2024. Colorado followed in May, and similar bills are pending in many other states and likely to pass during upcoming legislative sessions. While these initial AI laws are not identical, and the regulations likely will gain uniformity as they are adopted more broadly, three basic principles emerge.

  1. Prohibited Uses. AI use for some purposes may be so risky that it is prohibited by law. Examples include (a) manipulating an individual's decision-making in a manner that harms that individual or others, (b) scoring individuals based on personal characteristics in an unjustifiably determinant way, (c) conducting surveillance that invades expectations of privacy, including certain collection of facial recognition and other biometric data, and (d) deducing or predicting individuals' characteristics or behaviors in workplaces and schools, or based on biometric or health data. In addition to statutory prohibitions, businesses that implement AI often prohibit by policy its use for other improper purposes, including (e) activity that is unlawful, discriminatory, fraudulent, deceptive, defamatory, offensive, abusive, dishonest, unethical, or likely to harm the business, employees, customers, or others, (f) impersonating another, creating or disseminating false information, or violating the rights of others, including property, privacy, personal, and intellectual property rights, and (g) disrupting, damaging, or gaining unauthorized access to any other technology or computer system.
  2. Restricted Uses. Use of AI for certain other purposes presents risks of such harm that, while not prohibited, businesses must take precautions before using AI for those purposes. Examples include (a) certain facial or biometric identification, (b) employment decision-making, including related to hiring, placement, promotion, evaluation, compensation, and training, and (c) processing sensitive personal information, including information about children, race, ethnic origin, national origin, citizenship, immigration status, religion, politics, sexual orientation, identity or activity, biometrics, physical or mental health, geolocation, social security number, financial account number, and governmental identification number.

Before using AI for such a purpose, a business must conduct an AI impact assessment to identify the risks and utility of using AI for that purpose, implement safeguards to mitigate those risks, and memorialize the assessment in a report. Safeguards to mitigate risks could include strengthening some or all of the 'Fundamentals for AI Use' discussed below, as well as limiting the individuals permitted to use AI and the scope of AI used for the heightened-risk purpose.

  3. Fundamentals for AI Use. Fears about AI stem largely from the power of the technology and society's lack of knowledge of and experience with it. The following fundamentals for AI use are designed to allay those fears. While these safeguards can be heightened for higher-risk uses of AI, a business should consider and implement these fundamentals for all uses of AI.
  • Data Integrity. To train AI for a particular purpose, a business should use only data that is accurate, legitimate, reliable, and suitable for that purpose, and should not use data that contains inaccurate, discriminatory, biased, or offensive information or that generates those outcomes.
  • Transparency. Before using AI for a particular purpose, a business should notify individuals who may be impacted about its use of AI for that purpose. When disclosing any outcome generated in whole or in part by AI, the business should notify individuals that it used AI to generate the outcome and describe the scope of that use.
  • Human Control. Before using any outcome generated in whole or in part by AI, a business should have the outcome reviewed by a human qualified to ensure the accuracy, legitimacy, and reliability of the outcome. The business should train those individuals to recognize inaccurate, discriminatory, and biased outcomes, and empower them to escalate potential issues to appropriate personnel.
  • Testing and Auditing. Before using AI for any purpose, a business should test it to ensure that the data used to train it and the outcomes yielded by it are accurate, legitimate, reliable, and suitable for that purpose, and are not discriminatory or biased. After that testing, and once using the AI for a purpose, the business should periodically audit any additional data used to train the technology and the outcomes yielded by it to ensure the AI continues to adhere to these principles.
  • Cybersecurity and Privacy. Cybersecurity and privacy laws apply to AI. If privacy law requires notice to and consent from individuals for use of their personal information, the business should provide that notice and obtain that consent before using AI to process such information. Similarly, businesses must implement reasonable technological, physical, and administrative safeguards to protect the confidentiality, integrity, and availability of data inputs and AI outcomes involving personal information.

Our use of AI is as inevitable as our use of email and the Internet was 30 years ago. Businesses need to start now to develop an aptitude for the technology and a strategy to implement it in compliance with emerging AI law.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
