ARTICLE
12 August 2024

Privacy Pulse: CrowdStrike's Costly Software Update, PC Optimum Investigation, And Google's Database Leak

Siskinds LLP

Since 1937, Siskinds has been that firm of specialists serving individuals, families and businesses in southwestern Ontario and Canada from our offices in London, Sarnia and Quebec City. We’ve grown as the world around us has evolved. Today, we are a team of over 230 lawyers and support staff covering personal, business, personal injury and class action law and over 25 specialized practice areas.

The Siskinds Privacy, Cyber and Data Governance team is focused on providing businesses and professionals with monthly updates on technology, privacy, and artificial intelligence (A.I.) laws in both the U.S. and Canada.

For July, we have many updates to share, from a global I.T. outage to massive privacy settlements in the U.S. and Google ending its third-party cookie phase-out. Whether you're a start-up, an established enterprise, or a sophisticated data privacy officer, our monthly updates aim to provide timely A.I., tech, and privacy articles.

Global I.T. outage caused by software update, Cookies are no longer crumbling and OpenAI's "SearchGPT"

June 7, 2024: Google's internal security incident register (which essentially documents all data breaches and potential data breaches) was leaked, according to CPO Magazine. Although the leak is embarrassing for Google, it is an excellent reminder that (1) many privacy laws throughout North America require businesses to keep a register of security incidents; and (2) security incidents are more likely than you may think. Just because it's an "incident" does not necessarily mean that there was a breach of privacy or confidentiality, or that the incident must be reported to authorities and/or the data subjects.

July 9, 2024: The Global Privacy Enforcement Network reviewed more than 1,000 websites and apps and their privacy notices and found that "nearly all of them employed one or more deceptive design patterns that made it difficult for users to make privacy-protective decisions." For example, many of them suffered from the following issues:

  • Complex and confusing language: More than 89% of privacy policies were long or used complex language suited to those with a university education.
  • Interface interference: 42% of websites and apps used emotionally charged language to influence user decisions, while 57% made the least privacy protective option the most obvious and easiest for users to select.
  • Nagging: 35% of websites and apps repeatedly asked users to reconsider their intentions.
  • Obstruction: In 40% of cases, people faced obstacles in making privacy choices (such as accessing or deleting personal information).
  • Forced action: 9% of websites and apps forced users to disclose more personal information when trying to delete their account than they had to provide when they opened it.

As a side note, as a privacy professional, I feel attacked when asked to draft "shorter" privacy notices. It's difficult to draft a "short" privacy notice when the business's operations are complex. Obviously, if a business is a lemonade stand and the only time it collects personal information in written form is when people send comments through its website, then we can all agree that its privacy notice should not be a 35-page dissertation. With that being said, what we do need to be concerned about is whether the business's privacy notice concretely describes its practices, is easy to read and understand, and is located in an easy-to-find place.

July 19, 2024: The New York Times reports that A.I. developers have traditionally used the internet's "enormous troves of text, images and videos... to train their models. Now, that data is drying up." The linked article notes that publishers of content have "taken steps to prevent their data from being harvested," such as setting up paywalls, changing their terms of use, and/or even blocking A.I. developers' web crawlers.

I can confirm this shift is real. Many of our publishing clients are concerned about their content being harvested, and these are some of the solutions we have discussed with them. Obviously, the easiest "solution" is to amend the terms of use to prohibit harvesting, but if someone harvests anyway, the business will need a practical mechanism to stop the conduct, or will otherwise spend tens to hundreds of thousands of dollars on litigation asking a court to stop it.

July 22, 2024: According to the International Association of Privacy Professionals, Google has ended its project to phase out its use of third-party cookies; the project had been ongoing for the past five years. With that being said, Google is still considering alternatives and is shifting to an approach that allegedly "elevates user choice." This could look like updated web browsers and technologies that are more conducive to accepting universal opt-out or opt-in mechanisms. As a reminder, many U.S. comprehensive privacy laws contain language surrounding universal opt-out mechanisms, especially for opting out of targeted advertising.
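One widely discussed universal opt-out mechanism is the Global Privacy Control (GPC) signal, which participating browsers transmit as the `Sec-GPC: 1` request header. As a rough technical sketch only (not legal advice, and assuming a server framework that exposes request headers as a simple dictionary), honoring the signal starts with detecting it:

```python
def honors_universal_opt_out(headers: dict) -> bool:
    """Return True if the request carries a universal opt-out signal.

    The Global Privacy Control specification transmits the signal as the
    request header `Sec-GPC: 1`. HTTP header names are case-insensitive,
    so we normalize before comparing.
    """
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("sec-gpc") == "1"

# A request from a GPC-enabled browser signals an opt-out:
assert honors_universal_opt_out({"Sec-GPC": "1"})
# An ordinary request carries no such signal:
assert not honors_universal_opt_out({"User-Agent": "example"})
```

Detection is only the first step, of course; what a business must then do with the signal (e.g., suppress the sale or sharing of that visitor's personal information) depends on the applicable state law.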

July 24, 2024: CrowdStrike, a cybersecurity firm, pushed an update to its software that protects businesses from cyber-attacks and disruptions. However, the update had a bug that caused "8.5m Windows machines to crash en masse." The faulty update is estimated to have cost U.S. Fortune 500 companies $5.4 billion, as reported by The Guardian. Is it time to move to Linux?

July 26, 2024: OpenAI has entered the search market, the traditional space of Google, DuckDuckGo, Bing, and Yahoo (did I forget any others?), with its selective launch of SearchGPT. SearchGPT will "provide summarized search results with source links in response to user queries," according to Reuters. This sounds like how Bing AI currently works. I'm looking forward to testing it.

July 29, 2024: Consumer Reports published an article on the importance for manufacturers of internet-connected devices to have cybersecurity vulnerability disclosure programs. Essentially, if someone finds a product security vulnerability, the company should have an effective means for that vulnerability to be quickly reported to those with the power to fix it. The advantage of such a procedure is that if a consumer discovers a zero-day exploit, the consumer can report it to the business, enabling the business to fix the exploit before malicious actors are able to take advantage of it.

Canada: Investigation into PC Optimum and inappropriate access to patients' eMR

July 1, 2024: Quebec's Act respecting health and social services information and amending various legislative provisions came into force, along with two regulations. Essentially, this act introduces a comprehensive privacy framework for the processing of Quebecers' health and social services information (HSS).

July 24, 2024: According to the CBC, the Canadian Office of the Privacy Commissioner has opened an investigation into "allegations that some Loblaw customers have been unable to delete their PC Optimum accounts." Although the Personal Information Protection and Electronic Documents Act (Canada) ("PIPEDA") does not provide a "right to delete," as do the European General Data Protection Regulation and many U.S. comprehensive privacy laws, it does state that "Personal information shall not be used or disclosed for purposes other than those for which it was collected, except with the consent of the individual or as required by law. Personal information shall be retained only as long as necessary for the fulfilment of those purposes" [emphasis added]. See Principle 5 of PIPEDA.

July 25, 2024: According to the CBC, a pediatrician at Windsor Regional Hospital inappropriately accessed individuals' electronic medical records. If you are a Health Information Custodian under the Personal Health Information Protection Act (Ontario), remember that any unauthorized use or disclosure of a patient's electronic medical record ("eMR") generally requires a breach report to the Ontario Information and Privacy Commissioner and to the patient.

United States: TikTok in the news and the USPS shared personal information with social networking companies

June 2024: TikTok changed its advertising policy and now prevents advertisers from targeting children and teens in certain circumstances. See TikTok's Business Help Center.

July 1, 2024: The U.S. Department of Health and Human Services' (HHS) Office for Civil Rights (OCR) settled with Heritage Valley Health System after alleged violations of HIPAA's Security Rule, which resulted in a ransomware attack. OCR alleges that Heritage Valley failed to "conduct a compliant risk analysis to determine the potential risks and vulnerabilities to electronic protected health information in its systems; implement a contingency plan to respond to emergencies, like a ransomware attack, that damage systems that contain electronic protected health information; and implement policies and procedures to allow only authorized users access to electronic protected health information."

Under the terms of the settlement, Heritage Valley agreed to pay $950,000 and implement a corrective action plan that will be monitored by OCR for three years.

July 17, 2024: The US Postal Service apparently was "sharing the postal addresses of its online customers with . . . Meta, LinkedIn and Snap," according to TechCrunch, through tracking pixel technology employed on its website. The USPS has since allegedly remediated the issue. This reminds me of late 2023, when it was reported in Canadian media that Canada Post was creating mail marketing lists and renting them to businesses without consumer consent.

July 25, 2024: The US Department of Justice (DOJ) unsealed an indictment against a North Korean national for carrying out ransomware attacks against healthcare facilities, according to The Hacker News.

August 2, 2024: The U.S. DOJ and the Federal Trade Commission (FTC) filed a lawsuit against TikTok and ByteDance (the parent corp.) for "failing to protect children's privacy on the social media app," as reported by Reuters. See the Complaint here. The DOJ and FTC open the Complaint by alleging that,

"[f]or years, [the] Defendants have knowingly allowed children under 13 to create and use TikTok accounts without their parents' knowledge or consent, have collected extensive data from those children, and have failed to comply with parents' requests to delete their children's accounts and personal information.

Defendants' conduct violates the Children's Online Privacy Protection Act of 1998 ("COPPA") and Children's Online Privacy Protection Rule ("Rule" or "COPPA Rule"), a federal statute and regulations that protect children's privacy and safety online. It also defies an order that this Court entered in 2019 to resolve a lawsuit in which the United States alleged that TikTok Inc.'s and TikTok Ltd.'s predecessor companies similarly violated COPPA and the COPPA Rule by allowing children to create and access accounts without their parents' knowledge or consent, collecting data from those children, and failing to comply with parents' requests to delete their children's accounts and information."

If you recall from my last July Privacy Pulse, on June 18, 2024, the FTC referred a complaint to the DOJ. I'm looking forward to this boxing match.

U.S. Federal Trade Commission (FTC) enforcement: complaint against NGL Labs and hashing data is not anonymized data

July 2, 2024: The U.S. FTC brought a complaint against Arise Virtual Solutions Inc. alleging unfair and deceptive practices by making misleading and unsubstantiated earnings claims when Arise sold business opportunities to consumers seeking to work from home in customer service, and for violating the FTC's Business Opportunity Rule by failing to provide the necessary disclosures.

For example, Arise promised that its recruits could "quickly and easily start their own customer support business, be their own boss, set their own schedule, and earn up to $18 per hour." Yet, after "enrolling in [Arise's] business opportunity, they must invest hundreds of dollars of their own money in upfront and ongoing costs, including equipment purchases as well as fees for mandatory certification courses, background checks, and so-called 'platform usage fees.'... Even without factoring in the significant start-up and recurring costs associated with [Arise's] business opportunity, the vast majority of consumers who invest fail to earn the promised hourly rate of $18... In truth, 99.9% of Defendant's workers earned an hourly base pay of less than $18 from 2019 to 2022."

July 9, 2024: The FTC commenced an action against Start Connecting LLC and Start Connecting SAS, doing business as USA Student Debt Relief, and corporate officers Douglas R. Goodman, Doris E. Gallon-Goodman, and Juan S. Rojas. USA Student Debt Relief is a company that allegedly "systematically deceive[s] financially strapped consumers into paying hundreds of dollars for the false promise of student loan forgiveness." The FTC alleged the following: First, the Defendants misled consumers into thinking they were affiliated with the Department of Education, and then convinced consumers to give them access to the consumers' Federal Student Aid accounts during telemarketing calls.

Second, the Defendants, after gaining access to the student aid accounts, represented to consumers that the consumer qualified for the Defendants' loan forgiveness program (which had a name resembling the official program by the Department of Education), even if the consumer did not actually qualify. The Defendants would then make up quotes for the consumers' monthly payments going forward.

Third, the Defendants would require the consumer to pay an advance fee (often $400 to $1,200) to lock in that quote. Many consumers were misled into thinking that the advance fee applied to their loan balance. Additionally, this practice contradicted the Defendants' advertisements, which stated, "No fees until you settle your account."

Fourth, the Defendants would provide quotes of low monthly payments (e.g., $9, $19, or $29 monthly) and represented that these payments were the "consumers' monthly loan payments when [the Defendants] actually are simply pocketing the money themselves."

Fifth, the Defendants would reinforce their representations by "posting fake reviews and testimonials to their website and social media profiles." These fake reviews used stock photos of people with assigned fictional names.

Sixth, the Defendants violated the Telemarketing Sales Rule by calling numbers on the National Do Not Call Registry and misrepresenting material information during the telemarketing calls.

Lastly, the Defendants violated the Gramm-Leach-Bliley Act by using false statements to obtain customer information of financial institutions.

The message is loud and clear for businesses: don't deceive your customers; don't make up testimonials; and don't violate the Do Not Call rules, regardless of which side of the border you operate on (both Canada and the US have their own rules).

July 9, 2024: The FTC and the District Attorney of Los Angeles (LA DA) brought a complaint against NGL Labs, LLC alleging violations of the FTC Act, the Restore Online Shoppers' Confidence Act, and the Children's Online Privacy Protection Rule. The LA DA is bringing an action under the California Unfair Competition Law and the False Advertising Law.

As some of you are aware, NGL stands for "not gonna lie", and the NGL App allows people to share anonymous messages to people who post an NGL link on their social media.

Where do the FTC and the LA DA allege NGL went wrong?

  • First, after an anonymous message is sent, the consumer receives a notification that they have a new anonymous message. Below the message are two buttons: one the consumer may push to find out "who sent this," and a second to reply. If the consumer pushes the first button, a popup invites them to pay for NGL Pro. In grey font, text explains that Pro members would only receive hints about the identity of the sender. Additionally, some of the hints were false. This is deceptive.
  • Second, the NGL Pro subscription screen had even smaller, fainter text that read, "pro renews for $6.99/week." Many consumers thought upgrading to Pro was a one-time cost, not a weekly subscription.
  • Third, the NGL App sent fake anonymous questions to consumers, representing that these questions were from people on their social media feeds.
  • Fourth, NGL represented its App to be safe for children, and that it uses "world class AI content moderation", which includes "deep learning and rule-based character pattern-matching algorithms . . . [in order to] filter out harmful language and bullying." NGL further represented that the App can "detect [the] semantic meaning of emojis, and... pull... specific examples of contextual emoji use [allowing them to]... stay on trend... understand lingo, and... know how to filter out harmful messages." The FTC and the LA DA allege that these representations are not true.
  • Fifth, NGL collects personal information from children under the age of 13, in violation of COPPA, by not notifying parents or obtaining verifiable parental consent.

Importantly, for businesses, the most efficient means to learn what not to do is to review what others have done that got them in trouble. Do not lie; do not deceive; be transparent about how your product or service works; ensure that your subscription terms are clear and are found in a conspicuous location; and, if you are collecting personal information, ensure you follow privacy law.

July 24, 2024: The FTC reminds businesses that hashing data does not make it anonymous. "While hashing might obscure how a user identifier appears, it still creates a unique signature that can track a person or device over time." As you may know, to anonymize personal information, you need to make it impossible to associate the information with an identifiable person.
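A minimal sketch illustrates the FTC's point. The hashed value no longer looks like an email address, but because hashing is deterministic, the same person always maps to the same output, which is exactly what a tracking identifier needs (the `hash_identifier` helper below is hypothetical, for illustration only):

```python
import hashlib

def hash_identifier(email: str) -> str:
    """Hash an email address with SHA-256, a common (mistaken) "anonymization" step."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# The output no longer looks like personal information...
pseudonym = hash_identifier("jane.doe@example.com")

# ...but it is deterministic: the same input always yields the same digest,
# so the hash is itself a stable, unique signature that can follow a person
# across datasets and over time. That is pseudonymization, not anonymization.
assert pseudonym == hash_identifier("  Jane.Doe@example.com ")
```

Two companies that each hash the same customer's email can still join their datasets on the digest, which is why regulators treat hashed identifiers as personal information.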

U.S. Federal Communications Commission (FCC) Enforcement: CaptionCall and TracFone Wireless

July 9, 2024: The FCC announced a $34.6 million settlement with CaptionCall to resolve "an investigation into the company's unlawful retention of call content beyond the duration of a call and submission of inaccurate information to the Telecommunications Relay Service Fund Administrator." As part of the settlement, CaptionCall is to "conduct a data inventory, implement a data retention schedule, and invest in measures such as privacy-enhancing technologies (PETs) and education resources for consumers."

July 22, 2024: The FCC also announced a $16 million settlement with TracFone Wireless, a subsidiary of Verizon Communications, to resolve an investigation into whether TracFone "failed to reasonably protect its customers' information from unauthorized access in connection with three data breaches[, which]... resulted in the unauthorized access to and exposure of customers' proprietary information (PI), including certain customer proprietary network information (known as CPNI) and personally identifiable information (known as PII)..."

As noted in the FCC press release, "[t]he failure to reasonably secure customers' proprietary information violates a carrier's duty under Section 222 of the Communications Act and also constitutes an unjust and unreasonable practice in violation of Section 201 of the Act. It is also a violation of Section 222 of the Communications Act to impermissibly use, disclose, or permit access to individually identifiable CPNI without customer approval."

New York: Department of Financial Services provides A.I. guidance for insurers and Attorney General launches two privacy guides

July 11, 2024: New York's Department of Financial Services adopted guidance for insurers to ensure that the use of A.I., especially during underwriting and pricing, does not "perpetuate or amplify systemic biases that have resulted in unlawful or unfair discrimination".

July 30, 2024: The New York Attorney General launched two privacy guides: a Business Guide to Website Privacy Controls and a Consumer Guide to Tracking on the Web. Remember, just because a jurisdiction does not have a "comprehensive privacy law" does not mean there are "no" privacy laws. Privacy practices are still subject to consumer protection laws that prohibit unfair or deceptive acts or practices (hence the FTC being seen as the chief privacy regulator in the US).

Texas: $1.4 billion settlement with Facebook

July 31, 2024: Meta Platforms (formerly Facebook) agreed to pay Texas $1.4 billion to settle a dispute alleging that Meta "illegally [used] facial-recognition technology to collect biometric data of millions of Texans without their consent."

U.S. class actions and data breaches: Intuit, Advance Auto Parts and Patagonia, Inc.

July 8, 2024: A class action against Intuit was filed for failing to adequately safeguard sensitive data when the data was compromised in the TurboTax and Credit Karma data breach of March 2024. See Joseph Garite v. Intuit Inc., Case No. 5:24-cv-03960 (N.D. California 2024).

July 11, 2024: Advance Auto Parts says more than 2.3 million people had their personal information breached (including their names, social security numbers, driver's license numbers, and dates of birth), as reported by The Record. It appears the breach occurred as part of the May breach suffered by data storage firm Snowflake.

On the same day, a class action lawsuit was filed against Patagonia, Inc., an outdoor clothing and gear company. The complaint alleges that Patagonia uses a service provider (Talkdesk, Inc.) that records and analyzes customer-service calls. Talkdesk's software essentially intercepts the call, listens to it, records it, and analyzes the call's content. Talkdesk uses A.I. to analyze the "callers' words to determine what the caller is talking about and how the caller is feeling." The complaint alleges that neither Talkdesk nor Patagonia disclosed this collection and use of personal information to the customer, and neither obtained any consent from the customer.

The complaint also alleges that Talkdesk used the personal information collected for its own purposes as well. If true, this could be problematic because Talkdesk's A.I. could be trained to associate a certain identifiable individual with a certain "feeling", which could prejudice that identifiable individual when they interact with other Talkdesk clients who use that same A.I. software.

July 13, 2024: AT&T disclosed that it had a data breach affecting nearly all of its wireless customers (including yours truly). Part of the breached data appears to be "cell site identification numbers, potentially allowing the threat actors to triangulate the approximate location of a customer when a call was made or a text message was sent," as reported by The Hacker News. Fortunately, AT&T, in its Securities and Exchange Commission (SEC) Form 8-K filing dated May 6, 2024, said that the breached data did not include "the content of calls or texts, personal information such as Social Security numbers, dates of birth, or other personally identifiable information." The next day, the Verge reported that AT&T paid $370,000 in bitcoin to the hacker to "delete [the] customer data that was stolen from it." Apparently, the hacker sent AT&T a video confirming the deletion of the information.

It's always a good idea to ensure your calls or texts are encrypted locally before going out over the provider's network, and then decrypted locally on the recipient's device. That way, even if a breach impacts the content of calls or texts, your calls or texts would still be encrypted. For texts, I personally use Signal.

FYI, it appears a lawsuit against AT&T has been commenced in the Northern District of Texas. If you're interested in reading the complaint, you may find it here: 3:24-cv-01797.

July 19, 2024: According to Reuters, Oracle may be settling a class action lawsuit for $115 million, which alleged that Oracle collected, without authorization, personal information of "hundreds of millions of people"; the information Oracle allegedly collected included "where people browsed online, and where they did their banking, bought gas, dined out, shopped and used their credit cards." Oracle allegedly "sold the information directly to marketers."

Prioritize your privacy: The path to compliance starts here

Establishing a comprehensive privacy program is essential for all types of businesses. At Siskinds, our Privacy Concierge program offers custom subscription programs.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
