A recent decision of the Irish Data Protection Commission imposed a €1.2 billion fine against the parent company of a large technology company (the "Company") for violating the General Data Protection Regulation (GDPR). This was the largest fine ever assessed under the GDPR, and it was imposed for conduct that the Chair of the European Data Protection Board described as “systematic, repetitive, and continuous.” The Company has expressed disappointment in the decision and is exploring avenues of appeal.

But the fine tells only half the story. The decision also includes an order that the Company cease the processing and storage of European users’ data within six months. This type of injunctive relief should concern every American company conducting business in Europe and gathering data on European consumers. The prospect of a data transfer ban is designed to pressure U.S. negotiators to resolve the question of how American firms handle such data, and the resulting agreement will shape cross-border transactions for years to come.

THE STORY BEHIND THE STORY: In the wake of Edward Snowden’s revelations about the ways in which Washington gathers data on citizens of other countries, E.U. officials have come to think of American intelligence gathering much the way the U.S. thinks of China’s security apparatus. Accordingly, E.U. officials view the Company as a privacy threat to their citizens just as some in Congress view TikTok as a pipeline of Americans’ personal data to Beijing. As we saw during the pandemic, Big Tech has generally cooperated with government efforts to flag or even de-platform those espousing unpopular opinions. And Europeans may like their social media, but they seem to value their privacy even more.

WHAT SHOULD BUSINESSES BE DOING? With the proliferation of state, federal, and foreign data protection standards, companies should engage privacy counsel to craft policies that will help them stay in compliance. They should also review their cyber insurance policies to confirm two things: first, that coverage responds to any privacy regulatory proceeding, not just those precipitated by a breach event; and second, that it covers fines and penalties wherever insurable by law, with insurability determined by the law of the jurisdiction most favorable to the Insured.



The Biden administration is taking an active role in the development of Artificial Intelligence (AI), noting that while offering extraordinary benefits, this technology has the potential to “threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent.”  In the interest of balancing progress with civil rights, the White House Office of Science and Technology Policy has identified five principles to guide the design and use of automated systems. This “Blueprint for an AI Bill of Rights” is intended to protect citizens and ensure that this technology is deployed responsibly. 


The five principles of this proposed AI Bill of Rights are:
  • Safe and Effective Systems: Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system.
  • Algorithmic Discrimination Protections: Systems should be used and designed in an equitable way. They should not contribute to unjustified treatment based on race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. 
  • Data Privacy: Individuals should not be subjected to abusive data practices, and citizens should have control over how their data is used. Systems should be designed to ensure that only data strictly necessary for the task at hand is collected. Developers should also minimize the potential impact of surveillance technologies on human rights.
  • Notice and Explanation: Individuals should be notified when an automated system is being used and should understand how its results affect them. These notifications should be written in plain language and should make clear when the automated system is not the sole factor determining the outcome.
  • Human Alternatives, Consideration, and Fallback: Individuals should be able to opt out, where appropriate, and have access to a live person who can address any issues a user encounters with AI. Alternatives should be readily accessible and protect the public from especially harmful impacts. In some cases, a human alternative may be required by law. There should also be an escalation process if the automated system fails or produces an undesirable impact on the individual.

The Takeaway

Considered together, the five principles of the Blueprint for an AI Bill of Rights form an overlapping set of backstops against potential harms. Although this framework may change if and as it is codified into law, businesses would be well-advised to begin incorporating the security and privacy elements outlined here into the design of their AI service offerings to minimize potential liability in the future.


Constr. Indus. Laborers Pension Fund v. Bingle, 2023 Del. LEXIS 154 (May 17, 2023) 

One of the most popular social media platforms (the “Company”) found its confidential source code posted on an online collaboration platform for software developers. While public concern about the security of social media users’ data has been on the rise, this leak is also a major exposure of the Company’s intellectual property. The Company contacted the software platform, citing copyright infringement, and asked it to take down the code. The software platform complied; however, how long the leaked code was online remains unclear.


In the months leading up to this event, the Company underwent drastic leadership changes, followed by mass layoffs that affected 75% of its workforce. Those layoffs received substantial public attention. The Company suspects that one of its former employees was responsible for the leak and has asked a federal court to order the software platform to reveal the identity of the person who shared the code, as well as of any other individuals who downloaded it.


This incident also serves as a reminder for employers about the sensitivity of employee departures: departing employees can exploit a business’s sensitive data, especially when they leave on less-than-amicable terms. Although such coordination is easier said than done, the IT, HR, and Legal departments need to work closely during times of turmoil to ensure the security of the offboarding process. That is especially true when the departing party is as sophisticated as one of the company’s technical engineers. HR and Legal should be ready to remind all departing employees of any non-disclosure agreements. Companies should also consider notifying key clients of employee departures, so the necessary parties know that those employees no longer have authority to act on the company’s behalf. Finally, even during a period of layoffs, it is important to part with employees as humanely as possible to avoid aggrieving people who once had knowledge of the company’s vulnerabilities.

The Takeaway

The growing number of once-rare derivative claims against directors for oversight failures, coupled with increased cybersecurity risks and the SEC’s proposed cybersecurity rules, carries real implications for D&O insurance underwriters: liability risk is higher, and underwriting must now account for board processes and reporting structures.