Risk and the FTCA: Safe Haven in Cyber

Upshot

The FTCA and the Securities Act of 1933 give the government broad, indirect legal powers to fine and impose restrictions on companies that fail to provide demonstrated evidence of "reasonable" security measures. There is no well-defined, universalizable standard of what "reasonable" means, even if some standards attempt to expose certain basic metrics and hygiene checks for companies. The reality of "reasonable" security is not so simple: it requires rigorous defense and practical, well-thought-out steps by business leaders to avoid unfair penalties and copious restrictions, especially in the early days, when extensive security and legal expertise is not practical to acquire.

It's possible to take such steps simply, at low cost, and in a way that both prevents public fallout in the event of a breach (a PR disaster) and keeps the FTC from ever seeing you as an attractive target.

The Precedent

Prior to beginning this project, I set out to answer a few questions: what cybersecurity laws apply to businesses and what liability do they create? What can I do, as a business owner or manager, to mitigate risk?

The former question (what laws exist governing cybersecurity requirements) does not have a straightforward answer. A few laws explicitly cover the protection of consumer data, but these only apply to businesses involved in "financial activity" as defined by the FTC (https://www.ftc.gov/business-guidance/resources/ftc-safeguards-rule-what-your-business-needs-know). If you are one of these businesses, you carry a special burden of reporting and tracing the processing of consumer data under the Gramm-Leach-Bliley Act. Even if you are not involved in "financial activities" as defined by the FTC, you might still be liable under Section 17(a)(2) of the Securities Act of 1933 if you offer the public a security (https://www.sec.gov/news/press-release/2018-71, https://www.govinfo.gov/content/pkg/COMPS-1884/pdf/COMPS-1884.pdf).

Roughly, it says that if you offer the public a security, you can't fib about what's going on inside your company that might influence the value of that security. The FTCA contains clauses granting the government similarly broad privileges, supporting the imposition of almost exactly the same precedent (https://www.govinfo.gov/content/pkg/COMPS-388/pdf/COMPS-388.pdf) in 5(a)(2), a commonly invoked clause in the cases touted by the FTC (https://www.ftc.gov/business-guidance/resources/ftc-safeguards-rule-what-your-business-needs-know).

You might even get COPPA tossed at you (https://www.ftc.gov/enforcement/cases-proceedings/1023120/rockyou-inc), even if you explicitly state that your service is not intended for use by children. Precedent exists such that explicitly decreeing, in the Terms of Service presented to users, that children must not use the service is not sufficient to provide the protections assured by COPPA. In the case of Rock You! (link in the parenthetical in the previous sentence), the ire of the FTC was drawn after headlines spread about a massive "32 million" user breach where, according to conjectural statements in tech journals and a post in an online forum, user credentials were retrieved by a malicious actor after being exfiltrated from a database in which they were "stored in clear text".

This alone was an alleged and prosecuted breach of the FTCA (since Rock You! stored passwords in clear text and, as the FTC argued, should've known those passwords could be reused for other services, creating risk for consumers visiting the site), but COPPA was also invoked: it was alleged that over 160,000 children used the Rock You! service, despite a clause in their privacy policy explicitly stating that "if you are under 13 years of age then please do not use or access the Rock You! Sites at any time or in any manner". The complaint alleges that Rock You! "knowingly" accepted users under the age of 13. I'm not sure how that was substantiated. Perhaps the only substantial evidence that could support that claim would be if the FTC had evidence that Rock You! collected the age of their users during registration and did not programmatically deny users below the age of 13. But even if they had done this, they could still be liable. Because children can just lie. I did this all the time as a kid.

So what exactly is the burden the FTC is implying exists in this case? The capacity to trace and, behaviorally and programmatically at the point of registration, intuit the age of a user with reasonable confidence based on ... what? Their attestation? Rock You! was already relying on that in their Terms of Service, which the child could've just read. Their actions were no different than if they had programmatically filtered out users claiming to be under 13. So what could they have done? What is the FTC implying? What level of sophistication is required to validate beyond reasonable doubt the age of a user prior to registration for a web service? It doesn't exist. Not today, and definitely not in 2008. So that's unreasonable.

Nonetheless, they were charged, interestingly, with a fine of not more than $16,000 per violation of the rule. I didn't dig deeper to see what counted as a discrete violation of COPPA, but if failing to provide sufficient protections for children counts once per child who accessed the site, they were looking at roughly a $2.5 billion fine (160,000 alleged child users at $16,000 each). Clearly that wasn't the case; the ultimate penalty was $250,000. I'm not sure how that was calculated.

Most of the details that would probably be necessary to defend Rock You! aren't readily available to me. The gist of the FTC's argument, outside the bounds of their application of COPPA, was effectively: Rock You! deceived consumers by failing to provide reasonable security measures, thereby violating FTCA 5(a)(1). This reasoning has been applied even in cases where the business made no express claims of having or maintaining a reasonable security program (https://www.ftc.gov/sites/default/files/documents/cases/2005/09/092305comp0423160.pdf). This is ironic, though, since it's clear the court probably had almost no way of effectively interpolating between what was required of the business to deliver its customer experience at the time and the necessity for security, or why it would even be reasonable to expect that storing passwords in clear text is not a "reasonably" secure practice (as insane as that sounds to information security practitioners, it's not impossible this was actually the right decision for them). What is "reasonable" in security is nowhere close to a clear line to be drawn: not by a court, not in a few weeks, not without all of the information and trade-offs the business had to practically incur to make the service possible.

Rock You!’s Case

But let's get technical. So ... what, storing passwords in plain text? What does that mean?

You have to think, for a moment, about what "identity" really means. You have to question, for yourself, how you conceive of your notions of people, objects, and associations between lived experience and the conceptual ideas that you use to filter them into categories. Somewhere in your lived experience you believe yourself to have had an experience that has led you to associate a "thing" with an "idea". So something, to you, is "unique", meaning you can differentiate between it and other things in your lived experience. It behaves differently, it speaks differently, it looks or sounds different, whatever it may be. You can use this idea of things being different to just make up a string of characters that allows you to differentiate between "it", whatever it is, and "other things". So you have a notion of "identity" as an assigned categorical description of a discrete object in the natural world. You may be confused, and have an overly broad categorical description of things, or you may be too narrow, and have an overly precise description of things. One way or another, you have a notion of "objects" as a connection between experience and ideas.

Is this really how reality is? I don't know. Frankly, in some ways this entire model seems just wrong. It works, though, and it's what we're used to in security. So suppose, then, that now that we have an object, we also have another challenge: we want to be able to allow that "object" to do certain things. Maybe we want to let an "animal" into our house, a "person" onto our board of directors, or a "computer program" to run our lives and make all our decisions for us (yes, this is a dig at OpenAI and ChatGPT). But we're careful, because objects exist in reality, and they move around, come and go, and have their own schedules. We don't want to let the wrong object have access to something critical or sensitive, so we have to know, at the door to our house, or the door to our board room, or the point of a decision-making process, that the object we are allowing to do the thing we care about is actually the object we want to allow to do the thing we care about.

This is trivial! You say. Of course we don't need to worry about this, we just look at the object! We hear it speak, we see its mannerisms, we "feel" its presence. But ah, we are not in the physical world anymore. You forget: in computing, we are in a blind channel of digital impulses travelling about wires carrying binary signals across the world. The entire context of the challenge has changed ... we can't use our senses anymore, and as it turns out, some of our most critical decision points and trust elements in life manifest through these channels (controlling the behavior of critical infrastructure, administering health care services, flying planes, building relationships, etc.).

So what, then, do we do? How could you know, when operating in this conceptual blind channel, without your intuition or senses, that an object is what the object claims to be? Well, here we arrive at one key distinction: we cannot use this blind channel alone to establish our traditional notion of "identity" and "objects". We must first use some alternative means of establishing a notion of "identity" and "object" definitions, which we then map to a scheme or protocol used within the confines of the blind channel to prove a pre-established association between an artifact and our traditional notions of "object" and "identity". This is obvious, right? If you were in a pitch-dark room, and you could only ever communicate in an atonal voice with a nondescript series of words that no one could ever tell the difference between, you would find your normal process of assigning the ideas of "objects" and "identity" impossible.

You would be forced to first draw an additional layer of association between a traditional "object" and "identity" that you could then use in your blind channel. Hence the password. Once we have this mapping, we have our hook into our traditional notion of "object" and "identity" from the natural world in our blind channel. We have a specific thing that we transmit in a blind channel that gives us confidence that the transmitter is an "object" or "identity" we trust because we previously established that trust through an alternative channel. Once we have that thing, we have a flurry of fascinating mechanistic means of protecting that thing, so that people snooping in our blind channel can't access it.

Because that's the tragedy, right? We either failed to build a reliable computing system, or we lost our "specific thing that we transmit in a blind channel that gives us confidence that the transmitter is an 'object' or 'identity' we trust because we previously established that trust through an alternative channel", so our notion of "identity" and "objects" collapses in our blind channel. We just cannot let that "specific thing that we transmit in a blind channel that gives us confidence that the transmitter is an 'object' or 'identity' we trust because we previously established that trust through an alternative channel" be lost.

So since that thing is very special, we don't want it sitting around on a hard drive for anyone to find. And because of the genius of the many brilliant cryptographers before us, we actually have a way of storing that thing such that it is not in its actual form on the hard drive, and the only time we ever have to receive the actual "specific thing that we transmit in a blind channel that gives us confidence that the transmitter is an 'object' or 'identity' we trust because we previously established that trust through an alternative channel" is at the door (the door to our house, or the door to our board room, or the point of a decision-making process, in the analogy above). So it never stays on disk in its real form. That's possible. Barring huge revelations in foundational unsolved problems in mathematics, that's reliable.

So ... maybe you didn't follow me, but ... this implies two things:

  1. No one has to store a clear text password if they have the computational resources for performing efficient hashing ("a way of storing that thing such that it is not in its actual form on the hard drive", in the analogy above); see the sketch after this list.

  2. Additionally, it is important for you, as a business, to understand that when you open your web service to registration that doesn't require non-digital validation of an identity (e.g., to just anyone over the open internet), it is impossible to establish trust. You have given away access to your services to a random person. You have a trust-less relationship with that object. Nothing about the spurious string they've produced implies you can trust them, and any access you've given them is effectively free-range access that you may as well have provided to any malicious entity, or child pretending to be 17+, just daring the FTC to screw you with a quarter-million-dollar fine. What you have is a problem of "managing untrusted users I've given access to my service", not a problem of "being sure my end users are who they say they are". You have to have the former, because you're a business, and the latter isn't practical ... yet. But you should understand that the problem isn't one of perfect proofs of identity and the perfect management thereof; it's managed delegation of computational resources (your services) to untrusted entities.
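
To ground point 1: here's a minimal sketch of never keeping the password itself on disk, using the open-source Python bcrypt library (pip install bcrypt). The function names are mine, purely for illustration.

```python
import bcrypt

def register(password: str) -> bytes:
    # Derive a salted, one-way hash; only this digest is ever stored.
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

def login(password: str, stored_hash: bytes) -> bool:
    # The clear text password exists only transiently, "at the door":
    # we re-derive the hash and compare it against the stored digest.
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

stored = register("hunter2")           # what lands in the database
assert login("hunter2", stored)        # correct password verifies
assert not login("password1", stored)  # wrong password fails
```

If the database leaks, the attacker gets digests, not the "specific thing" itself.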

So that was a long but necessary deviation from my point here, but ... in any case. The important element of the above is the bit about "if they have the computational resources for performing efficient hashing". Probably the argument went that, in 2008, it was completely reasonable for them to have such capacities. Is this really true? 32 million end users hitting even a highly optimized endpoint, maybe up to 1,000,000 requests per second ... Can modern cryptographic primitives support that? Probably. Was it "reasonable" for Rock You! to avoid that due to resource limitations? I don't know. It is certainly possible it was not.

If they had the cash on hand for a 60GB next-gen server or ridiculous cloud compute costs, or the best engineers on the market to optimize a custom crypto library, then ... yeah, I guess. But of course this could also just have been a completely reasonable business trade-off, of the kind people have to make all the time when dealing with suboptimal resourcing, less-than-motivated engineers, and small amounts of capital during early growth phases of development. One that is completely unavoidable and inexorable to the process of spinning up and running a digital business. Furthermore, they could have compensated with other, cheaper measures that were just as reasonable for their threat model. They could've simply isolated the host in a segment of their network behind an advanced WAF that was more compute-efficient than the hash would've been, maybe one that doubled as a load balancer. They could've prioritized EDR/XDR or strict access management to compensate for the lack of hashing. They could've put their Falcon ML slider on the most aggressive level for that particular endpoint, tightened their firewall to the highest possible behavioral detection level, or implemented a clever networking scheme to proxy and intercept traffic in transit before it hit the endpoint. There's a lot they could've done to avoid the compute costs and loss of efficiency that 32 million end users polling their service for logon attempts, each requiring a complex SHA-3 or bcrypt hash operation, would've incurred. It is not necessarily unreasonable.
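
If you want a rough feel for the cost side of that argument, here's an illustrative benchmark sketch (again assuming the Python bcrypt library). The work factor directly sets how many logins per second a single core can verify, and the numbers vary wildly by hardware, so treat this as a sketch, not a capacity-planning tool.

```python
import time
import bcrypt

password = b"correct horse battery staple"

# Time a handful of hashes at increasing bcrypt cost parameters.
for rounds in (4, 8, 12):
    salt = bcrypt.gensalt(rounds=rounds)
    start = time.perf_counter()
    n = 10
    for _ in range(n):
        bcrypt.hashpw(password, salt)
    per_hash = (time.perf_counter() - start) / n
    print(f"cost={rounds}: ~{per_hash * 1000:.1f} ms/hash, "
          f"~{1 / per_hash:.0f} verifications/sec/core")
```

Each increment of the cost parameter doubles the work, halving the login throughput one machine can sustain, which is exactly the business trade-off described above.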

But again, this is just my imagination ... It's perfectly possible that this, in reality, was not at all an "unfair or deceptive" practice as defined by FTCA 5(a)(1). It could've been a best effort, a best effort that was ridiculed and slapped with a $250,000 fine. Orrrrr not, and they were greedy and lazy. I don't know. I wasn't there (:

What about the initial access vector (SQL injection)? How could Rock You! have avoided the claim that a fairly trivial SQL injection is a breach of consumer trust under the FTCA? Well, what was the FTC's answer to the reality that such a vulnerability (par for the course back then) existed in their tech stack? Periodic vulnerability reviews by an independent CISSP. So ... have some guy with a meaningless credential that doesn't map to the reality of security at all charge you to mess around on your endpoints for a few hours and produce a report of a few vulnerabilities that maybe do, maybe don't get patched, and that maybe do, maybe don't get exploited, and just run on a treadmill, burn cash, and fail as a business because the FTC is ... not up to speed with the reality of technology or industry.
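
For the technically curious, here's the entire class of bug in a dozen lines: a hypothetical sketch using Python's standard-library sqlite3, with an invented table, showing both the injection and the boring, decades-old fix (parameterized queries).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x')")

user_input = "' OR '1'='1"  # classic injection payload

# VULNERABLE: concatenating input into the SQL string lets the
# payload rewrite the query, so it matches every row in the table.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(len(rows))  # 1 -- the payload matched everything

# SAFE: a parameterized query treats the input as data, not SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows))  # 0 -- nobody is literally named "' OR '1'='1"
```

None of which requires a CISSP, an audit, or a consulting engagement; it requires an engineer who has seen the pattern once.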

Well, that's unfortunate, and gross incompetence on behalf of the FTC. I mean, Rock You! had to comply, by law. But that's stupid. A company can recognize flaws in its architecture and remediate them without falling victim to the trap of poor-performing consulting "CISSP" experts (if casual observers aren't aware, nothing about a CISSP implies a security professional has technical acumen). Moreover, even where a periodic audit or managed detection and response program exists, there is no known way of guaranteeing that even obvious and evident flaws don't exist in the system or network. The cause of breaches is very rarely actual gross negligence or incapacity to comply with simple security measures, even if it seems so post-mortem to investigators. The one thing you learn working with technology is that the thing that costs you the most time (days, sometimes) and is the hardest to investigate and understand is always some silly little slip-up in the structure of your boolean logic or a subtle typo. These things happen and they cost engineers and tech companies dearly, but they aren't simple negligence; they are a reality of technology. Defect-less systems are practically unheard of and definitely impractical, especially at the current state of the industry.

Only once in my life have I heard of a verifiably defect-less system, and that was purely anecdotal. Never in my 7 years of engineering experience have I observed or known there to be a verifiably defect-less system, even one free from simple and obvious mistakes like SQL injection or misconfiguration. The reality of engineering is such that bugs and misconfigurations exist, and their presence does not necessitate negligence. So clearly the notion of "reasonable" security is yet to be defined, and perhaps some very dubious precedent has already been set regarding its nature.

Obviously you want periodic vulnerability scanning and management, and either managed detection and response or occasional penetration testing is certainly an important consideration. But to simply avoid the FTCA itself, you don't even need that. You need to demonstrate, beyond a "reasonable" doubt, that you haven't violated a "good faith" effort to not be unfair or deceptive in your business practices.
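
And "periodic vulnerability scanning" doesn't have to mean an expensive engagement, either. Here's a sketch of the cheap version: run a scanner on a schedule and archive dated reports, so you accumulate evidence of an ongoing, good-faith process. This assumes the nmap CLI is installed; the target address and paths are hypothetical.

```python
import subprocess
import datetime
import pathlib

TARGET = "192.0.2.10"              # hypothetical host that you own
REPORTS = pathlib.Path("scan-reports")
REPORTS.mkdir(exist_ok=True)

stamp = datetime.date.today().isoformat()
report = REPORTS / f"{stamp}-{TARGET}.txt"

# -sV probes service versions; --top-ports limits to common ports.
subprocess.run(
    ["nmap", "-sV", "--top-ports", "1000", "-oN", str(report), TARGET],
    check=True,
)
print(f"archived scan evidence: {report}")
```

Run it from cron once a month and the report directory itself becomes your paper trail.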

There are lots of ways to do that, even in the event of a breach. The impact of a data breach shouldn't be exponentially compounded by retaliatory legal action or exaggerated reactions from the press, unless of course there was indeed true negligence involved; but what true negligence looks like in a technical context is far harder to define than most of the cases the FTC has prosecuted so far account for. The toughest part of this process is consumer and policymaker education. Tech companies can't be beholden to the misguided perceptions of the public. It's important to institute reasonableness in legal and brand material. Companies can thrive after a breach. The law should be a guide that nudges companies in a direction that assures they are using good judgment in executing their mission, not a reactive framework for compounding the costs of a data breach post-mortem.

It's more important to write a book that can be read by an audience, even with a few silly mistakes, than to sit around plugging every typo, right? Bias for action, right? "Just do it", Nike, no? It's imperative in business. If you can demonstrate that you've taken reasonable steps to reward customers by making calculated risks that provide the greatest return with a complete understanding of the challenges of the technology business, you can plow forward without worrying about the FTCA or the Securities Act of 1933. Better have a good lawyer who understands this stuff (come back to me in a few years when I finish my JD).

Avoiding Liability in Cybersecurity with Tight Budget Limitations

For now though, practically, you can make an excellent case for vulnerability management as a fluid process involving real-world practice and refinement (Agile ... yea?). As an accepted risk of business practice, in many cases inexorable from doing business digitally. If you have an APT threat model ... well ... you have other problems. If you are some small guy trying to avoid a script kiddy (an inexperienced and unsophisticated hacker using automated tools and frameworks), and you can't afford a CISSP of the stars to screw you for a few hours of automated script scanning with a few minor adjustments based on their best effort from experience, you're going to have to be nimble, think on your feet, and use what you have.

FTCA compliance could be assured entirely by reasonable efforts made by ... even the business owners themselves. You don't even need six-figure security talent. "Reasonable" security can be accomplished by anyone with the slightest amount of technical acumen. Just learn a little bit. Make a VM. Toy around with the tools. Learn the technology powering your business. Of course you cannot be grossly negligent. But you, as a manager of a startup, founder of your exciting new company, can take simple, practical steps to cover your bases in the event of a breach, from the public, from the FTC, from anyone, so you can keep going ... even if you get picked on by hackers or screw up.

But you need to get ahead of things ...

You need:

  1. Documented proof of a "more than reasonable" effort made by you that can blow any FTC prosecutor out of the water when they try to claim you breached your burden of providing "reasonable" security measures. Seriously, if you want to avoid a press hellstorm and a terrible fine, you need to think through what it means to implement a "reasonable" security protocol such that no one, at any level of sophistication or legal training, could screw you. You need to be ahead of any possible claim of negligence and shut it down before it even happens. Anticipate the bad press releases. Prepare for the breach. Prepare the legal and philosophical arguments describing why what you did was actually right. Don't get screwed by some uneducated lawyer. They don't know how hard it is for businesses, especially tech businesses, to succeed.

  2. Some documentation describing the steps you took, as technologists and business operators, to personally assure the secure architecture of your system and the management of its inevitable vulnerabilities.

  3. A fluid, adaptive, and continually evolving plan describing and managing the risks involved in the business (see the sketch after this list).

  4. Some actual effort to reduce those risks. Don't lie. But seriously ... it's probably not as much as you think. You just don't need APT-level defense all the time, especially as a small or medium-sized business. Security shouldn't be a means of holding the business hostage, as a lot of practitioners and businesses in the field try to make it.
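
To make item 3 concrete, here's a minimal sketch of a machine-readable risk register you could keep in version control. The field names and entries are invented for illustration; they don't follow any formal standard.

```python
import json
import datetime

risk_register = [
    {
        "risk": "credential database exposed via web app flaw",
        "likelihood": "medium",    # your honest estimate
        "impact": "high",
        "mitigations": [
            "passwords stored as bcrypt hashes, never clear text",
            "parameterized queries enforced in code review",
            "monthly scan, dated reports archived",
        ],
        "accepted_residual_risk": "zero-day in web framework",
        "last_reviewed": str(datetime.date.today()),
        "reviewer": "founder/CTO",
    },
]

# Commit this file: its version history is itself documented proof
# of a continually evolving, good-faith security process.
with open("risk-register.json", "w") as f:
    json.dump(risk_register, f, indent=2)
```

The format doesn't matter; the commit history does.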

Here's a key piece of advice for business leaders choosing security solution providers: if your security provider isn't trying to reduce your costs (whether that's your security team, managed service provider, contractor, etc.), they aren't looking out for your interests. Security is both a solvable problem and not one that requires endlessly growing amounts of capital. Cybersecurity is not exempt from considering its financial impact on the business, and the reality that security can be excessively costly must be considered. If the cost of the security solution(s) or security team is more expensive than the anticipated cost of a breach, there's a problem.
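
That rule of thumb is just the classic annualized loss expectancy (ALE) comparison from quantitative risk analysis. A sketch, with all figures invented for illustration:

```python
# ALE = single loss expectancy (SLE) * annual rate of occurrence (ARO)
single_loss_expectancy = 250_000   # e.g., a fine plus cleanup costs
annual_rate_of_occurrence = 0.05   # one such breach every ~20 years

ale = single_loss_expectancy * annual_rate_of_occurrence
print(f"expected annual loss: ${ale:,.0f}")   # $12,500

control_cost_per_year = 40_000     # quoted price of a security service
if control_cost_per_year > ale:
    print("the control costs more than the risk it addresses")
```

If the quote exceeds the expected annual loss it mitigates, your provider is not reducing your costs.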

If you find this possibility interesting, I encourage you to look at the ratio between the estimated costs to US businesses of all the cyber crimes reported to the FBI's IC3 in 2022 and the market cap of the cybersecurity industry. They are not even close to proportional.

It can be so, so much cheaper and easier for businesses to mitigate the actual risks they face in a manner that

  1. Manages the fallout of a breach publicly, with the right signaling and an honest demonstration of the indeed more-than-reasonable steps they actually took to secure their architecture beforehand, showing they did care and think about the problems (which I bet even Rock You! might've done ...).

  2. Prevents the FTC from even wanting to go after you, because they know they will lose and cannot craft a case.

  3. Returns cash to your bottom line to fuel your future and drive your innovation and dreams ... Your dreams. Your business's success. Your future, and everyone's prosperity.

There are a lot of great people at really cool companies out there in the security space, but ... as in all of tech, it is so easy for them to mistake their own passions for the simple devotion of a real bond between customer and business. A bond that delivers value, that understands customers, and that puts their needs before the business's passions.

Note: The remainder of this article is a pitch for my consulting services. I’ve since moved away from my consulting practice but included it below for completeness and context.

As a consultant, I strive to deliver the above three points, at the least cost possible, whether you have an APT threat model or a script kiddy on a Kali box who just learned MSF, or anything in between. I can complement and raise the value of your security team exponentially, in addition to whatever advanced APT threat model software (EDR, DLP, whatever other crazy fancy stuff) you have, or maybe ... instead of it. I strive to provide you with the least cost security possible that protects your interests and avoids all of the unnecessary risks (from good guys and bad guys) you might incur in your path to profitability. I believe there are new corners of the security market yet to emerge, new paradigms yet to be found, and new frontiers to discover. I believe in new ways to serve you, business leaders, in fueling your mission. That's why I started Sour.

I hope I can find a way to help you!

If you like the work above, want more details, or want to hear a proposal for a project, please reach out to me directly: [redacted]

I have 3 years of experience in EDR, I'm currently working on passing the bar in Washington and getting my CISSP, and I'm the guy who can help you find the lowest-cost security services that create the least risk for your business to excel. In a few years I'll even be able to go to court to defend you.