Smartphones have given people a virtual window to the world. As it turns out, this window works both ways, as these small devices also have the capability of disclosing their owner’s whereabouts, such as visits to abortion clinics. The issue of consumer privacy was recently litigated in an action brought by the Federal Trade Commission (“FTC”) against a broker of mobile app data (“Data-Broker”). 

The FTC accused the Data-Broker of using people’s digital footprints without their consent, thereby providing the Data-Broker’s customers (mostly advertisers) with unauthorized access to sensitive information. The FTC argued that the breadth of the geographical information the Data-Broker gathered and re-distributed without the consumer’s consent could reveal the most private details of the consumer’s life, such as their medical history, sexual orientation, or religious beliefs. In the case of visits to abortion clinics, this information could expose the individual to criminal charges if the procedure were outlawed in their state of residence. The collected information could also facilitate stalking, stigmatization, and discriminatory practices. 

In response, the Data-Broker argued that the FTC had not articulated a legitimate claim because it had failed to show that the harm to people's privacy was imminent. Simply put, the Data-Broker claimed that the FTC was exaggerating, and even suggested that consumers could simply shut down their phones if they did not want their personal information to be misused. Moreover, the Data-Broker contended that it did not broadcast patient health information. Rather, it merely revealed an imprecise geolocation, suggesting that a given consumer could have visited any of a variety of medical or other establishments in the vicinity.

During the hearing, the FTC sought to impeach the credibility of the Data-Broker by mentioning that the company itself had marked the information it was selling as sensitive. A federal judge agreed with the FTC, denying the Data-Broker’s motion to dismiss, and permitting the FTC to move forward with its case.


The Takeaway

Sometimes, the law has a hard time keeping pace with technological progress, which can lead to security and privacy concerns. By setting clear guidelines to qualify for coverage, cyber insurers are in a unique position to foster best practices among businesses. One way insurers accomplish this is by excluding coverage for intentional invasions of consumer privacy. Although defense costs are typically covered for unproven allegations, the fact that damages for such claims are uninsurable should discourage businesses from selling such information. While cyber insurance covers the inadvertent disclosure of personal information, businesses that profit from deliberate violations of privacy will find themselves on their own.


Great Am. Fidelity Ins. Co. v. Stout Risius Ross, Inc., 2022 WL 16571316 (E.D. Mich. Nov. 1, 2022)

A Michigan court held that an "implied-in-fact" contract requires insureds to reimburse defense costs to the insurer if it is determined that the insurer had no duty to defend a matter at issue. According to the court, that is true even when the policy at issue is silent on recoupment of defense costs. Under a reservation of rights, the insurer had agreed to defend the insured, a financial advisory firm, in litigation involving the insured's valuation services for an Employee Stock Ownership Plan. During motion practice, the court held that the insurer no longer had a duty to defend the underlying litigation. As a result, the insurer requested that the court award it the defense costs paid to the insured through the ruling on the duty to defend. 

In its initial coverage letter, the insurer agreed to provide a defense under a full reservation of rights. Notably, the insurer also asserted a right to seek reimbursement of defense costs should it be later determined that it had no obligation to defend the matter. The underlying complaint against the insured was amended, such that the only remaining charge against the insured was for “Federal Securities Fraud.” The insurer argued that an exclusion in the policy barred coverage for Securities Law Violations. As this was the only remaining claim against the insured in the underlying action, the court sided with the insurer, and found that the insurer owed no duty to defend the insured. The insurance carrier then sought reimbursement for defense costs. 

The insurer argued that it was entitled to reimbursement of the defense costs it expended when it did not have a duty to defend the insured based on a theory of implied-in-fact contract or unjust enrichment. The court found that the parties had entered both an implied-in-fact and an implied-in-law contract for reimbursement of defense costs. Despite the policy being silent on the issue, the court found that an insurer was entitled to reimbursement under an implied-in-fact contract where the insurer: 1) timely and explicitly reserved its rights to reimbursement; and 2) provided sufficient notice of the specific possibility of reimbursement. 

The Takeaway

From an insured’s perspective, this decision raises concerns about the benefits of the carrier agreeing to defend a matter under a full reservation of rights when the duty to defend can still be nullified by developments in the pleadings. Generally, the policy has terms to address such situations. Yet, when it does not, the insured may be on the hook to reimburse defense costs regardless of whether the carrier sufficiently reserved its rights. 


Merck & Co. Inc. et al v. Ace American Insurance Co. et al., No. A-001879-21-T02 (N.J. App. 2023)

A New Jersey appeals court heard oral arguments in a coverage dispute arising out of the 2017 NotPetya cyberattack, a strain of malware that cybersecurity experts have attributed to the Russian government. The attack is believed to have originated in Russia with Ukraine as its intended target, but it ricocheted around the world, hitting businesses across borders and industries. In this case, a pharmaceutical company sought coverage under its property policy for losses it sustained following the attack.

The insurers denied coverage on the grounds that the policy's exclusion for "hostile or warlike action" operated to preclude claims arising out of any state-sponsored attack, including those that take the form of cyber warfare. The insurers acknowledged that the pharmaceutical company was merely "collateral damage" in the attack but contended that the property policy's war exclusion applied in situations such as this.


In response, the pharmaceutical company noted that the property policy at issue was written on an “all risk” basis. The company also sought to draw a distinction between traditional and cyber warfare, noting that the latter was not explicitly referenced in the policy’s exclusion and maintaining that the use of the term “war” connotes “the deployment of armed forces against an enemy.” 


The Takeaway

Regardless of the ultimate outcome, this is yet another example of the coverage disputes that may arise when policyholders attempt to rely on traditional property and casualty policies for coverage in cyber matters. So-called non-affirmative or "silent cyber" coverage has become increasingly difficult to trigger as insurers have tightened policy wording to explicitly exclude such losses. A dedicated cyber insurance policy with a narrowly tailored war exclusion would be the insured's best hope of triggering coverage for such a claim.


Westron et al v. Zoom Video Commc’ns, Inc., No. 22-cv-03147-YGR (N.D. Cal. Feb. 15, 2023)  

A California judge dismissed a claim brought by consumers against a cloud conferencing company (the "Company"), alleging, among other things, invasion of privacy. The Company was accused of collecting consumers' personal data and sharing it with Google, Facebook, and other platforms without the consumers' consent. 

The court explained that to plausibly argue that the Company violated their privacy rights, consumers must show: “(1) a legally protected privacy interest; (2) a reasonable expectation of privacy in the circumstances; and (3) conduct by [the Company] constituting a serious invasion of privacy.” The law protects two types of privacy interests: (a) an interest in preventing the widespread use or misuse of sensitive and confidential information; and (b) an interest in maintaining an autonomous private life without outside observation, intrusion, or interference. 

In reviewing the consumers’ allegations, the court explained that alleging a defendant had the ability to invade one’s privacy was different from alleging that one’s privacy was, in fact, invaded. Here, the consumers did not allege that the Company actually shared their personal activities. Rather, they alleged that the Company shared other people’s sensitive data, not their own. Therefore, the court ruled that it was unable to determine whether the consumers’ sensitive or confidential information was compromised. 

Nevertheless, the court left the door open for the consumers to bring other claims against the Company, because the Company did not demonstrate that it would be prejudiced by further claims. 


The Takeaway

This case suggests that courts will closely evaluate the facts in cases alleging privacy violations against companies. Unless allegations of privacy violations come with enough specificity to enable a court to evaluate the nature and effect of the wrongdoing, courts will likely continue dismissing such claims. Yet, given increased consumer awareness of the importance and value of their data, businesses should review their practices for storing and maintaining consumer data to minimize the risk of future litigation. 


Firemen’s Ret. Sys. of St. Louis v. Telos Corp., et al., No. 1:22-cv-00135 (E.D. Va. Feb. 1, 2023) 

A federal judge dismissed an investor lawsuit against a cybersecurity company (the "Company"). The investors had alleged that corporate executives misled shareholders about the firm's prospects of winning government contracts and the timing of any such new business opportunities.


The judge concluded that none of the statements or omissions cited by the investors in their proposed class action rose to the level of reckless disregard for the truth necessary to plead a violation of the law. Instead, any such declarations by the Company’s executives amounted to puffery or “forward-looking statements” protected under federal securities law. The judge also ruled that the investors had failed to properly allege that the Company or its executives had a culpable mindset, or “scienter.” In other words, based on the allegations contained in the complaint, there was not enough to infer that the Company or its board knew that the statements they made (or failed to make) were false or misleading.


Specifically, the judge found no evidence that the Company or its executives knew the government contracts in question would be delayed. The judge further noted that the mere fact that the Company and its executives were wrong in some of their outlooks and expectations does not establish the required inference of scienter. In the judge's view, even the sale of stock by some of the Company's executives following an initial public offering did not suggest misconduct on the part of corporate executives. Subsequent drops in share prices during the period at issue could be attributed to other causes, such as the impact of the COVID-19 pandemic on business or changes in government spending due to cyberattacks against federal agencies. Taking the pleadings at face value, the judge stated that the Company and its officers could perhaps have acted negligently, but not fraudulently or recklessly. The judge also denied the investors' attempt to further amend their complaint, finding that they had failed to proffer any new allegations that would yield a different result.


The Office of the Attorney General of the State of New York (the "Office") initiated an investigation under the Executive Law and General Business Law into a spyware company (the "Spyware Company") that offered a mobile phone monitoring service (the "App"). The App solicited users by playing upon their insecurities, inviting people to surveil their significant others in order to catch cheating in romantic relationships.

The App copied information from the victim's device and transmitted it to the Spyware Company's servers. The information the App collected included call logs, text messages, camera images and videos, location data, email data, messages on messaging platforms, data from the most popular social media platforms, and browser histories. To function, the App modified the phone's default operating system in a way that rendered virtually any manufacturer's warranty for the device invalid. Ironically, the App took pains to protect the privacy of its own users: it allowed clients to hide its icon from the home screen, while its support team diligently walked customers through the process of protecting their own anonymity even as they stripped their victims of theirs. 

The Office pointed to numerous wrongdoings committed by the Spyware Company, such as misrepresenting the legal risk of using the product, providing insufficient disclaimers, concealing its affiliation with third-party review sites, presenting fabricated reviews as objective, misleading consumers regarding its refund policy, and overstating the security of the App. 

Finally, in a deal reached with the owner of the Spyware Company, the Office required the Spyware Company to notify the victims of cyber-stalking about the data breach, disclose the nature of the privacy violations that took place, and pay damages for unethical sales practices and the promotion of illegal privacy invasions.


The Takeaway

This company's fate suggests to business owners that enterprises built on illegal and inherently unethical activities rest on shaky foundations, no matter how sophisticated the software behind them. For cyber insurance, this matter likely means that carriers will learn from the breadth of violations that software businesses are capable of committing and will broaden their conduct exclusions accordingly. Moreover, carriers may use this matter as a lesson to fully exclude, or underwrite around, data breaches arising out of unauthorized access to consumers' tangible property, such as phones. 


Cothron v. White Castle Sys., 2023 IL 128004, 2023 Ill. LEXIS 146 (Ill. Feb. 17, 2023)

In a 4-3 decision, the Illinois Supreme Court determined that a cause of action under the Biometric Information Privacy Act (“BIPA”) accrues every time biometric data is collected or disclosed.


In the underlying matter, an employee asserted that her employer did not seek her consent to acquire her fingerprint biometric data until 2018, more than a decade after the Act took effect. Accordingly, the employee claimed that her employer unlawfully collected her biometric data (through fingerprint scans) and unlawfully disclosed it to a third-party vendor, in violation of BIPA's requirements of notice, consent, and disclosure. The question before the court was whether a cause of action accrues only once, when the biometric information is initially collected or disclosed, or each time biometric data is scanned and disclosed.


The employer argued that unauthorized collection or disclosure of biometric data can happen only once, thus creating a single cause of action the first time the violations occur. The majority opinion took a strict interpretation of the statutory language of BIPA. The court held that the plain meaning of BIPA's language establishes that a cause of action accrues with each and every capture of an employee's fingerprint scan, as well as with each instance in which the employer transmits such information to a third party. Recognizing the potential to create astronomical exposure for an employer, the majority stated that the trial court has discretion to fashion a damage award that fairly compensates the class members without destroying the defendant's business. Ultimately, the court left it to the legislature to clarify its intent with respect to the law. 


The dissenting opinion focused on the fact that the legislature, in enacting BIPA, could not possibly have intended to create such a draconian penalty for businesses by holding an employer liable for each instance in which a fingerprint is scanned. The dissent also raised two important consequences of the decision: (1) similarly situated employees would be incentivized to delay bringing their claims as long as possible to increase their recovery potential; and (2) the statute allows for damages of up to $5,000 per violation. For a company with many employees who scan their fingerprints frequently, this could lead to staggering damage awards. In this instance, such an award could exceed $17 billion. 

The Takeaway

While the court did not determine that each accrual would lead to a separate damage award, the decision creates significant potential exposure for businesses using biometric identifiers. Unless the legislature clarifies its intent or amends the law, businesses should attempt to resolve BIPA claims early in the litigation process and proactively work to become BIPA-compliant in their disclosure and consent policies before rolling out biometric devices. In light of this ruling, the potential costs of BIPA non-compliance may outweigh the benefits of using biometric identifiers in the workplace.