Following a congressional report of a data breach involving taxpayer information, a reputable attorney filed a lawsuit against a tax preparation firm and several major technology companies (collectively, the “Companies”) for failing to alert consumers to the sale of their data. According to the allegations, the personal data was collected through “spy cams,” or tracking pixels, and was used to build programs that deceived consumers.
The Companies allegedly worked together to implement tracking pixels on the tax preparation firm’s website, exposing consumers’ sensitive Tax Return Information (“TRI”). The data collected through the pixels included names, Social Security numbers, addresses, adjusted gross income, dependents’ information, dates of birth, health savings accounts, education expenses, and much more: data that prudent people would ordinarily avoid sharing on the social media platforms of large technology companies.
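For context on the mechanism at issue: a tracking pixel is typically a tiny, invisible image whose URL carries data as query parameters, so merely loading the image transmits that data to a third-party server. The sketch below is a minimal, hypothetical illustration of the technique; the endpoint, field names, and page path are invented for this example and are not the actual code at issue in the lawsuit.

```typescript
// Hypothetical illustration of how a tracking pixel transmits page data.
// The endpoint and field names below are invented for this sketch.
function firePixel(event: string, fields: Record<string, string>): void {
  const params = new URLSearchParams({ event, ...fields });
  const pixel = new Image(1, 1); // an invisible 1x1 image
  // Setting src makes the browser issue an ordinary GET request, carrying
  // the collected values to the third party's server as query parameters.
  pixel.src = `https://tracker.example.com/pixel.gif?${params.toString()}`;
}

// Wired to a tax-filing page, even sensitive values could ride along:
const agiField = document.querySelector<HTMLInputElement>("#agi");
firePixel("form_submit", {
  page: "/file/income",
  agi: agiField?.value ?? "",
});
```

Because the request looks like a routine image load, it is easy to overlook in a page audit, which is part of why pixel-based collection has drawn so much litigation.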
The Federal Trade Commission (the “FTC”) had warned the tax preparation firm that using consumers’ data for advertising without their consent would be penalized. Notably, the lawsuit was filed under the Racketeer Influenced and Corrupt Organizations Act (“RICO”), a statute that usually applies to organized crime. In addition to the RICO claims, the complaint cited violations of the Internal Revenue Code, the Federal Wiretap Act, and the California Privacy Act.
The Companies argued that they implemented measures to filter out sensitive data, such as bank account numbers, Social Security numbers, and contact information. Yet investigations revealed that those filtering mechanisms appear to have been flawed.
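To see why such filters often fall short, consider a simplified, hypothetical redaction routine of the kind a pixel vendor might apply before transmitting data. The regular expression below is an assumption made for illustration, not the Companies’ actual filter; it catches one common Social Security number format while letting close variants through, a typical failure mode.

```typescript
// Hypothetical redaction filter illustrating a common failure mode.
// The pattern matches only "123-45-6789"; the same number written
// without hyphens slips past the filter and leaks.
const SSN_PATTERN = /\b\d{3}-\d{2}-\d{4}\b/g;

function redact(value: string): string {
  return value.replace(SSN_PATTERN, "[REDACTED]");
}

console.log(redact("ssn=123-45-6789")); // "ssn=[REDACTED]"  (caught)
console.log(redact("ssn=123456789"));   // "ssn=123456789"   (leaks)
```

Pattern-based scrubbing is only as good as the patterns it anticipates, which is why regulators and plaintiffs tend to treat it as a mitigating measure rather than a guarantee.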
While pixel tracking remains a major issue in cybersecurity litigation, this lawsuit is expected to strengthen the case for additional legislation governing the use of consumer data.
According to a recent report by the Identity Theft Resource Center (“ITRC”), 2023 has already broken the annual record for data breaches. The ITRC reports that the first three quarters of 2023 saw 2,116 data breaches, surpassing the previous all-time high of 1,862, set in 2021. Cyberattacks remained the most frequently reported root cause of data breaches in the third quarter (“Q3”). Nearly 10% of these cyberattacks resulted from zero-day vulnerabilities, which have proven extremely difficult to detect and repel.
Of the reported data breaches, 344 organizations were impacted through their use of a vulnerable MOVEit product. The MOVEit file transfer vulnerability caused four of the top ten breaches by number of records disclosed. Counting both organizations that used the MOVEit software directly and those that relied on a vendor who did, more than 400 companies have been impacted.
With only two months left in the year and 2023 already exceeding past years’ breach totals, caution is warranted. Even the strongest defenses cannot guarantee that a business will not suffer a breach event. So long as software vulnerabilities exist, like those seen in the MOVEit products, hackers will continue to exploit them. To manage this risk, organizations need two business processes in place: an up-to-date Incident Response Plan, so the company has a strategy for breach events, and a risk transfer tool in the form of Cyber Insurance.
Recently, California Governor Gavin Newsom signed the “Delete Act” (the “Act”) into law. Under this new law, Californians will have the opportunity to have their personal data deleted from data brokers’ records. The Act is the first of its kind in the United States and requires data brokers to register with the California Privacy Protection Agency (the “Agency”). Once brokers are registered, the Agency can process California residents’ requests to have them delete personal information, regardless of how the brokers originally came into possession of it. The Act makes such requests easy, creating a “one-stop shop” for Californians concerned about their privacy and personal data, including, but not limited to, sexual health information, geolocation data, and religious affiliation.
The Act is a thorn in the side of many advertising agencies, which argue that it makes it much harder for consumers to learn about and purchase new products and services. Advertisers will have a few years to adjust, as the state agency charged with administering the law has until 2026 to set up the mechanism through which California residents can have their personal information deleted.
California’s passage of the Act is just one of many state efforts to ramp up protection of residents’ personal data. Those doing business in California are encouraged not only to ensure that their business practices comply with the Act but also to review their cyber insurance policies to understand how coverage would respond in the event of a privacy regulatory proceeding.
President Biden recently issued a landmark Executive Order (the “Order”) establishing a framework for managing the risks of artificial intelligence (“AI”). The Order sets new standards for AI safety and security to protect Americans’ privacy, promote innovation, and spur competition. With this Order, President Biden directed the most sweeping actions ever taken to protect Americans from the potential risks of AI systems.
Specifically, the Order requires companies developing AI models that pose a serious risk to national security, the national economy, or public health to notify the federal government when they train such models. Once a model is trained, the results of all red-team safety tests must also be shared with the federal government, helping ensure these systems are safe, secure, and trustworthy before they are made public.
Additionally, the Order directs the National Institute of Standards and Technology (“NIST”) to develop the standards for testing. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish an AI Safety and Security Board. Standards testing can create enforcement exposure for AI companies, as well as potential Errors and Omissions (“E&O”) claims arising out of alleged negligence in rendering testing services.
Further, the Order directs the Department of Commerce to establish standards and best practices for detecting AI content, including watermarking to label such content as AI-generated, so that the public can distinguish authentic government communications from imposters. These watermark labels may also be a useful tool for cracking down on copyright infringement, misappropriation of images, and other acts that may fall under the definition of a wrongful act in a Media Liability policy. The Order also calls on Congress to pass bipartisan data privacy legislation and provide federal support for the development of cryptography to preserve Americans’ right to privacy when data is harvested to train AI models. Failure to adhere to these federal requirements may not only result in a privacy liability claim but may also trigger coverage under an applicable Cyber Policy.
Lastly, the Order states that the federal government will provide guidance to landlords, federal contractors, and employers on the non-discriminatory use of AI and will work to ensure fairness throughout the criminal justice system. The Order proposes developing best practices on the use of AI in sentencing, parole and probation, and risk assessments, among many other applications.