The Risks Of Generative AI: Fake News (Video)

Byrne Wallace, Contributor

Generative AI products are trained on large datasets, which may not always be accurate and in some cases may contain biases. Some AI tools are also known to "hallucinate", meaning they generate inaccurate or irrelevant information.

In 2023, a New York law firm was fined for using ChatGPT to create court pleadings supporting its client's claim for damages. When the proceedings came to court, it became apparent that none of the cases cited in the pleadings were real: the AI program could not find any cases backing the claim, so it simply invented them.

So if you, or unknowingly your employees, are using generative AI to produce information about your company's products or finances, or to generate proposals for customers, there is a risk that this information could be inaccurate. It is therefore worth considering some form of human review before any AI-generated output is relied upon.

For more information, contact Victor Timon or other members of our Technology Team.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

