Nine Years Later: The Cybersecurity Issues at the Heart of the ‘Ashley Madison’ Scandal

Jun 18, 2024 | Blog

A new docuseries about the dating site Ashley Madison was recently released on Netflix. Titled Ashley Madison: Sex, Lies & Scandal, the three-episode series chronicles the 2015 scandal that ensued when a hacker group exposed the identities of the site’s users.

For those who may be unfamiliar, Ashley Madison is a dating site that caters to married adults looking to engage in extramarital affairs. The site had 37 million users at the time it was hacked in 2015 by what appears to have been an activist group. The activist group first leaked the email addresses of all the site’s users, and then leaked the contents of their accounts, including all their messages with the people they met on the site. The fallout was brutal, as you can imagine, and Ashley Madison eventually paid $11.2 million to settle a class action and an additional $1.6 million in fines to the Federal Trade Commission (FTC).

While there are perhaps many moral considerations about this story upon which to ruminate, there are also a lot of legal ones. From that perspective, if there was anything that could be done wrong, Ashley Madison did it: unfair and negligent practices, a poor response to the data breach, straight-up fraud, and subversive use of artificial intelligence. The ultimate data breach was really a question of “when,” not “if.”

The story also shines a light on a gap in privacy law between what data breach laws cover and the elements of personal information covered by the new comprehensive data protection statutes. The information leaked by the hackers in this case included names, email addresses, and lewd images. None of those elements, even together, create an obligation to notify data subjects under any state breach law.

Without the addition of a driver’s license number, Social Security number, financial account information, or, in a few states, a birthdate, Ashley Madison had no obligation to notify either its customers or any Attorneys General of the breach. It is a great irony that the very information that is most sensitive isn’t covered. And for the most part, the new comprehensive statutes don’t offer a private right of action that would provide relief either. Without the very strong fraud and negligence claims against Ashley Madison, its customers may well have been left without a remedy.

However, the company’s poor privacy practices and the unfulfilled promises it made only served to build the case against it. Section 5 of the Federal Trade Commission Act prohibits “unfair or deceptive acts or practices in or affecting commerce.” This broad consumer protection provision gives the FTC the authority to police all manner of business practices*, including privacy practices. In this context, “unfair” practices are essentially negligent ones; “deceptive” practices are, as the name suggests, dishonest statements. Ashley Madison engaged in both.

As a company built on the prospect of adultery, Ashley Madison had to promise its customers that their secrets were safe. So it did. When the company couldn’t find enough legitimate third-party security badges to place on the bottom of its website, it created them. Ashley Madison promised that for a $19 charge, consumers could ensure that their information was completely wiped from its servers; this turned out to be a lie. Needless to say, the FTC didn’t have any trouble finding that the company’s practices were deceptive.

At the same time it was making promises of heightened security, Ashley Madison failed to:

  1. have a written organizational security policy
  2. implement reasonable security measures, including failing to ensure that ex-employees’ passwords were disabled
  3. do any due diligence as to the security practices of its vendors
  4. use any reasonable available tools to monitor access to its systems
  5. store passwords and encryption keys as anything other than plain-text files in a Google Drive

This last failure led directly to the hack that impacted 37 million users around the world. The FTC easily proved that Ashley Madison was negligent with respect to data security and that its practices were therefore “unfair.”
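That last failure is also the easiest to avoid. Rather than keeping passwords in plain text, a company can store only a salted, slow hash and recompute it at login. A minimal sketch using only Python’s standard library (the function names and parameters here are illustrative, not a description of Ashley Madison’s actual systems):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    """Derive a salted PBKDF2 hash; store this record, never the password."""
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt + digest  # keep the salt alongside the hash

def verify_password(password: str, stored: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

record = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", record)
assert not verify_password("wrong guess", record)
```

Even if a database of such records leaks, an attacker must guess each password individually against a deliberately slow function, rather than simply reading credentials off a shared drive.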

Among its other deceptive practices, Ashley Madison also deceived its users about how many women had accounts on the site. Of the U.S. user base, 1 in 5 real profiles belonged to women. The company felt it should “level the field” regarding gender, and so it began creating fake profiles using stock photos and vocabulary created by studying the language its few real female users used.

These fake accounts were mostly managed by staffers who would engage in real-time chats and emails with users. But the company also used chatbots. These chatbots would send a male user a message not long after he had created his account—but in order to message back, he would have to buy credits. The users thought they were paying money to talk to humans, but this was not the case. Some users caught on, but when they complained, many of them reported that Ashley Madison threatened to mail the records of their activities to their home addresses.

These were the chatbots of 2010-2014. And while the AI that we are currently encountering in the form of assistants and question prompting on social media is terrible, you can imagine that the bots in the next iteration of Ashley Madison could be capable of something approaching conversation. Toying with someone’s emotions by making them think they are talking to a human seems particularly dangerous.

The legal lesson here is transparency. A company should tell consumers what it’s doing with their data and tell them if it is using AI in any way that could impact them. Additionally, every new use of artificial intelligence should be accompanied by its own impact assessment.

While it’s true that data security is an ever-changing landscape in terms of both tools and threats, some basic principles will always hold true. In short, companies should learn from the mistakes of Ashley Madison.

If your company needs assistance in shoring up its data protection practices and documentation or analyzing the potential legal risks of a new AI project, contact Tara Aaron-Stelluto.


*Just this week, the FTC announced action against Adobe and two of its executives for the unfair business practice of making it “absurdly difficult” to cancel an account.
