2 August 2024

Ferrari Targeted By Deep Fakes

Rahman Ravelli Solicitors


Rahman Ravelli is known for its sophisticated, bespoke and robust representation of corporates, senior business executives and professionals in national and international matters.
It is one of the fastest-growing and most highly regarded, market-leading legal practices in its field. This reputation rests on its achievements in criminal and regulatory investigations and in large-scale commercial disputes involving corporate wrongdoing and multi-jurisdictional enforcement, as well as its expertise in asset recovery, internal investigations and compliance.
The firm’s global reach, experienced litigators and network of trusted partner firms ensure it can address legal matters for clients anywhere in the world. It combines astute business intelligence and shrewd legal expertise with proactive, creative strategies to secure the best possible outcome for all its clients.
Rahman Ravelli’s achievements in certain cases have even helped shape the law. It is regularly engaged by other law firms to provide independent advice.

Angelika Hellweger highlights a rise in AI deepfake fraud, exemplified by recent attempts to impersonate Ferrari's CEO and WPP's CEO. Companies must enhance defenses against these sophisticated schemes, as deepfake incidents are increasingly prevalent.

Angelika Hellweger considers the latest high-profile criminal use of AI

Ferrari is reported to be the latest company to be targeted by criminals using AI deepfakes.

A Ferrari executive received WhatsApp messages that claimed to be from the carmaker's Chief Executive Officer, Benedetto Vigna. The messages spoke of a planned major acquisition and said the CEO might need the executive's help.

But the messages had not come from Vigna's usual business mobile number, and the profile picture of the CEO was different.

One of the messages said: "Be ready to sign the Non-Disclosure Agreement our lawyer is set to send you asap." Another claimed that Italy's market regulator and stock exchange had already been informed, and called for the "utmost discretion."

The executive then received a phone call from a voice that was a convincing impersonation of Vigna. The fake Vigna explained that he was calling from a different mobile number because he needed to discuss something confidential. The deal, he said, might face some problems relating to China and required a currency-hedge transaction to be carried out.

The executive was taken aback, and his suspicions deepened when he detected what he thought were slightly mechanical tones in the voice of the man claiming to be his boss. Saying he needed to verify the CEO's identity, he asked the caller for the name of a book that Vigna had recommended to him just a few days earlier. The call ended abruptly as soon as the question was asked.

The Ferrari incident comes just two months after Mark Read, the CEO of advertising giant WPP, was targeted by an equally elaborate deepfake scam in which he was impersonated on a video conference call.

Increase

While criminals are increasingly turning to AI to clone voices and create convincing images and videos, so far few of these attacks are reported to have been hugely successful. The case of an unnamed multinational company losing $26 million after an employee in Hong Kong was deceived using deepfake technology made headlines earlier this year, but the expected flood of similar examples has not yet materialised.

Yet the scale of this problem cannot be ignored. Statistics have shown that there are now three times as many deepfake videos and pieces of AI-generated content as there were two years ago. The same two-year period has seen a 700% increase in voice deepfakes being posted online.

One of the biggest challenges for law enforcement is finding the perpetrators. Organisations, therefore, need to be prepared for - and protected against - the possibility of a deepfake fraud attack. This requires carrying out risk assessments of all aspects of a company's work, providing appropriate staff training and taking a methodical approach to identifying and introducing the necessary measures.

This cannot be a mere tick-box exercise, as there is no shortcut to nullifying the threat posed by deepfakes.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
