EU's Highest Court Rules on Automated Decision-Making
The Court of Justice of the EU ("CJEU") recently issued a significant ruling regarding the scope of data subjects' right of access under the GDPR in relation to automated decision-making, including profiling. The CJEU clarified that individuals must receive meaningful information about the logic involved in automated decision-making processes, balancing transparency with the protection of other fundamental rights and commercial considerations, such as third-party data or trade secrets, in line with the principle of proportionality.
The CJEU specified that individuals must receive an explanation of the "procedures and principles" applied in automated decision-making, including what personal data was used and how it was utilized, rather than a detailed explanation of the algorithms or the full algorithm itself. This information should be provided in a concise, transparent, intelligible, and easily accessible form, using clear and plain language.
The Court determined that Member States cannot enact laws that entirely deny individuals the right of access on the ground that such access might jeopardize a trade secret. Instead, organizations are required to submit the purportedly protected information to the relevant supervisory authority or court, allowing for a case-by-case assessment that balances the rights and interests involved and determines the extent of the data subject's right of access to the information.
Takeaway: The CJEU's clarification that "meaningful information" does not require disclosing underlying algorithms or trade secrets to data subjects will reassure organizations. However, businesses may be concerned about having to share trade secrets with supervisory authorities or courts. The decision highlights the importance of transparency and clarity in automated decision-making, which is particularly challenging with AI-driven processes where staff may be distanced from the decision logic.
Illinois Judge Scraps Rulings Applying BIPA Change Retroactively
Recently, Judge Elaine E. Bucklo of the United States District Court for the Northern District of Illinois vacated her prior rulings in two separate cases applying a recent amendment to the state's Biometric Information Privacy Act ("BIPA"). Specifically, the court vacated its prior determination that the amendment was a clarification of existing law and therefore could be applied retroactively to conduct occurring before the amendment went into effect.
The BIPA amendment at issue—which went into effect on August 2, 2024—created a new limitation on liability. Originally, BIPA (as interpreted by the Illinois Supreme Court) provided statutory damages on a per-violation basis, meaning that a defendant could be held liable for wrongfully collecting or disclosing the same information multiple times (e.g., an employer who fingerprinted employees each day). Under the amendment, all unauthorized collection (or disclosure) of the same biometric information pertaining to the same person collected using the same method constitutes a single violation, no matter how many times that specific piece of biometric information is shared.
The first case to consider the retroactivity of the amendment was Gregg v. Central Transport LLC, decided in November 2024. In Gregg, Judge Bucklo held, as a matter of first impression, that the amendment was a "clarification" of existing law, so it could be applied retroactively. In other words, defendants could leverage the new rule limiting recovery for repeated violations even in cases where the plaintiff filed suit before August 2, 2024. Judge Bucklo took the same approach two days later in Amigon v. Old Dominion Freight Line Inc. Following Gregg, however, two other judges in the same district reached the opposite conclusion, holding that the BIPA amendment applied only prospectively: Judge Alexakis in Schwartz v. Supply Network, Inc. and Judge Ellis in Giles v. Sabert Corp. Now, on reconsideration of her orders in Gregg and Amigon, Judge Bucklo has reversed course and joined her colleagues in holding that "the better interpretation of the amendment is that it effected a change in the law" and, therefore, cannot be applied retroactively to claims that were filed before the amendment.
Takeaway: The plaintiffs' and defense bars alike have been waiting for further clarification as to whether the BIPA amendment would apply retroactively. The implications are huge: a difference in damages that could swing billions of dollars in large pending consumer class actions. While this question is by no means settled, these two decisions are a setback for potential BIPA defendants seeking to limit exposure from claims filed pre-amendment. What was previously a 2-1 split among judges in the district has now become a 3-0 consensus against retroactivity. Notably, however, these decisions have all involved litigation commenced before the amendment. Courts have not yet considered whether this approach extends to future suits arising out of pre-amendment conduct. That will no doubt be the subject of more litigation. Stay tuned.
New York AG Secures Another Data Security Settlement with Auto Insurer
On March 20, 2025, New York Attorney General Letitia James secured $975,000 in penalties from Root Insurance Co. ("Root") in connection with claims that the auto insurer failed to protect drivers' personal information from being swept up in an industry-wide hacking campaign that targeted online auto-insurance quoting applications. According to the Agreement, some of the stolen information was then used to perpetrate unemployment benefits fraud.
Specifically, the Agreement states that Root's deficient security practices enabled bad actors to exploit vulnerabilities in the pre-fill feature of Root's auto-insurance quoting tool. The tool would pre-populate users' personal information before the user had a chance to enter it, exposing users' full driver's license numbers in an auto-generated PDF at the end of the quoting process. According to the Agreement, Root knew about bad actors' exploitation of the pre-fill feature in January 2021 but failed to adequately assess the breach or its scope, and used deficient controls in its attempts to thwart later attacks. Root did not admit any wrongdoing in connection with the settlement.
In addition to penalties, the Agreement requires Root to take steps to bolster its information security practices, including by maintaining a comprehensive information security plan, developing and maintaining a data inventory of private information, and maintaining reasonable authentication procedures for access to private information, among other requirements.
Takeaway: The Agreement with Root is another in a series of settlements by the NY Attorney General, highlighting a sustained focus on cybersecurity enforcement. Although enforcement against a hacking victim may seem like a "blame the victim" scenario, Root's alleged knowledge of the vulnerability in January 2021 and its failure to act may be what prompted the NY Attorney General's action. Post-breach, businesses must conduct thorough forensic analysis and implement recommended remediation to mitigate future liability.
Websites' Visible Privacy Statements Are Sufficient Under PA Wiretapping Law
A federal judge ruled that websites that disclose third-party data collection in their privacy statements, where a "reasonably prudent person" could see them, do not violate Pennsylvania's wiretapping law. Judge William S. Stickman IV's opinion in Popa v. Harriet Carter Gifts, Inc. emphasizes the importance of user consent, whether express or implied, to online tracking, as well as the role privacy policies play in determining whether a user has consented to such tracking.
Pennsylvania's Wiretapping and Electronic Surveillance Control Act makes it illegal to intentionally intercept, disclose, or use online communications without the consent of all parties involved. In his opinion, Judge Stickman found that the plaintiff implicitly consented to the alleged interception of her data despite her insistence that she had never reviewed the defendant website's privacy statements.
The court's application of the reasonable person standard looks to common sense; both the visitor and the website must act reasonably. With respect to the visitor, the court evaluated whether a reasonable person could have been alerted that third parties might track that person's online activity. With respect to the website, the court determined that if a privacy policy is reasonably conspicuous on a website, a visitor's consent to the policy may be implied. Here, because the policy was reasonably conspicuous, the plaintiff was deemed to have implicitly consented to its terms and therefore was on notice of the defendant's data collection practices.
Takeaway: The court's opinion in Popa v. Harriet Carter Gifts, Inc. underscores the importance of intuitive website design and prominently displayed privacy policies that clearly outline data collection practices, allowing users to review them before using the site. This case reinforces that a consumer's claim of not seeing or reading the policy is insufficient if the policy is reasonably conspicuous and written in clear language. Privacy policies must be conspicuous, straightforward, and transparent, adhering to the old adage "say what you do, do what you say."
UK Regulator Begins Enforcing Online Safety Act Codes
On March 17, 2025, Ofcom, the UK's communications regulator, began enforcing its Illegal Harms Codes of Practice under the UK Online Safety Act ("OSA"), marking the first major milestone in enforcement of the OSA. The Codes were originally published in December 2024, giving online service providers a three-month preparation period. The OSA applies to providers of search services and of services that allow users to share content online or to interact with each other online.

The Codes require organizations to implement stringent safety measures to combat illegal content. Key requirements include appointing a senior executive responsible for OSA compliance, adequately funding content moderation teams, enhancing algorithmic testing to curb the spread of illegal content, and removing accounts associated with terrorist organizations. Additionally, organizations must proactively detect and eliminate child sexual exploitation and abuse material using advanced tools such as automated hash-matching. However, Ofcom has indicated that it will take a risk-based approach to enforcement and that the risk levels posed by a service will dictate the extent to which specific measures set out in the Codes are expected to be implemented.
The OSA covers over 100,000 online services, including search engines and platforms hosting user-generated content, and addresses 130 priority offences such as child sexual abuse, terrorism, and fraud. Failure to comply with the OSA's measures, including completing the risk assessment process, could result in fines of up to 10% of an organization's global revenue or £18 million, whichever is greater. Business disruption measures, such as blocking orders, are also on the table for more serious infringements. Ofcom has indicated its readiness to enforce these regulations and will hold further consultations to expand the Codes, potentially including measures such as banning accounts that share child sexual abuse material and implementing crisis response protocols.
Takeaway: The enforcement of Ofcom's Illegal Harms Codes under the OSA represents a significant shift towards proactive regulation of online harms. Companies must now demonstrate accountability and take measures to prevent and remove illegal content. Attention will focus on Ofcom's initial enforcement actions and whether it will employ its strongest powers or adopt a more collaborative approach. Further developments are anticipated.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.