AI Software Manipulated Into Leaking Sensitive Data

FK
Frankfurt Kurnit Klein & Selz

Contributor

Frankfurt Kurnit provides high quality legal services to clients in many industries and disciplines worldwide. With leading practices in entertainment, advertising, IP, technology, litigation, corporate, estate planning, charitable organizations, professional responsibility and other areas — Frankfurt Kurnit helps clients face challenging legal issues and meet their goals with efficient solutions.
As large language models (like ChatGPT) and other types of generative AI grow in popularity, researchers are starting to uncover their vulnerabilities and the ways they can be exploited for nefarious purposes.
United States Technology
To print this article, all you need is to be registered or login on Mondaq.com.

As large language models (like ChatGPT) and other types of generative AI grow in popularity, researchers are starting to uncover their vulnerabilities and the ways they can be exploited for nefarious purposes. Earlier this month, researchers at Robust Intelligence published a blog post detailing the ways they were able to prompt the NVIDIA AI Platform-designed for businesses to customize and deploy their own generative AI models (for example, to integrate with customer service chat bots)-into revealing personally identifiable information from a database.

Though NVIDIA has begun to address and resolve the issues the researchers identified, the research indicates that, broadly, AI "guardrails"-the rules, filters, and other mechanisms designed to ensure safe and ethical use of the applicable software-may not be sufficient to protect against undesirable outputs, especially where the AI model is trained on "unsanitized" data.

Key Takeaways:

  • Even advanced AI systems can be vulnerable to data leaks and other exploits
  • There may be severe legal consequences for organizations that fail to prevent AI models from revealing personally identifiable information
  • Organizations need robust internal guidelines and policies detailing how AI (as well as sensitive data) should and should not be used
  • In addition to those written policies, organizations should maintain meaningful human oversight to regulate the use of AI

www.fkks.com

This alert provides general coverage of its subject area. We provide it with the understanding that Frankfurt Kurnit Klein & Selz is not engaged herein in rendering legal advice, and shall not be liable for any damages resulting from any error, inaccuracy, or omission. Our attorneys practice law only in jurisdictions in which they are properly authorized to do so. We do not seek to represent clients in other jurisdictions.

We operate a free-to-view policy, asking only that you register in order to read all of our content. Please login or register to view the rest of this article.

Mondaq uses cookies on this website. By using our website you agree to our use of cookies as set out in our Privacy Policy.

Learn More