PAUL ROSENZWEIG

THE EVOLVING LANDSCAPE OF CYBERSECURITY LIABILITY

Sitting with a small group of cyber policy experts in Washington, D.C., I heard a well-respected cyber policy analyst say: “The cybersecurity of the Internet of Things is a national security issue. It is long past time for the law to impose liability on those who write insecure code.”

The implications of this statement are far-reaching. Take the automobile industry as an example. For the developers and manufacturers designing today’s cars, the security of the systems they deploy is a matter of good engineering; they worry about safety, effectiveness, cost, and efficiency. In Washington, however, the government worries about cyber-attacks. And, Washington being Washington, it will act in the way it knows best – through law.

In my judgment, we are on the cusp of a liability revolution that will change the way companies do business. Today, for the most part, liability for cybersecurity failures – that is, for writing code that can be manipulated to behave in ways its authors never anticipated – is almost non-existent. The standard shrink-wrap contract that comes with any software package disclaims liability, and for the most part courts have upheld those contracts. Responsibility (and liability) lies with the end user.

In short, the software liability system today is almost exactly where the rules for automotive liability stood in the mid-1960s. In 1916, Judge Benjamin Cardozo first advanced the idea that anyone who manufactures an “inherently dangerous” product (in that case, a Buick car) is responsible for making sure it is safe. This was the first recognition that if you are not careful when you make a dangerous consumer good – if, in the language of the law, you are “negligent” in how you build something – you may be liable for the consequences. Consider what it would mean to be careful in, say, writing code for autonomous vehicles. The auto industry’s response to that decision is instructive: it started putting liability disclaimers into its contracts.

Then came the product liability revolution – the idea that manufacturers are liable for any defect in their products. This body of law does not speak of care or negligence. It speaks of design defects, and it was the precursor of what we have come to think of as strict product liability: if you build it and it malfunctions, you are responsible. The change was driven by many sociological factors – the growing industrialization of America; a sense of unfairness and injustice that the little guy should bear the cost; and the safety problems and disasters that were beginning to plague the auto industry and that led to so many of the safety standards now deeply embedded in the work of car manufacturers.

We will likely face a similar set of circumstances soon, when a safety “disaster” occurs and a company is held liable for poor code in a consumer product. I don’t know in which “Internet of Things” field such a disaster will strike. But imagine an autonomous bus driving off the road, killing 30 schoolchildren. Or imagine ransomware akin to WannaCry exploiting a vulnerability in a line of cars, rendering tens of thousands of them inert. There would certainly be a public uproar – and changes to the law.

Where life and death are at issue, responsibility and liability cannot be far behind. This will affect the insurance industry, and it may invite regulatory intervention if the industry is not proactive in its approach. In addition to best practices for cybersecurity and standards for software development, it will require the development of as-yet-nonexistent audit and grading mechanisms to support insurance risk rating.

One cautionary note reflects a near certainty. The consumer IoT industry should work hard to support the development of a liability standard and a functioning insurance market – not for its own sake, though that would be reason enough, but because the alternative is even worse.

Some companies will take a “head in the sand” attitude, hoping they can forestall software security liability for the foreseeable future. I think that is highly unlikely. If industry groups do not proactively help develop cybersecurity liability standards, then Washington – which considers this a national security issue – will intervene and set the standards for them.

We’ve already seen rumblings in Washington in that direction. In the wake of last year’s Mirai botnet attacks, which took advantage of insecure small consumer devices like thermostats and DVRs, the Federal Trade Commission began considering the need for security regulations. The National Highway Traffic Safety Administration has published a report on “best practices” in automotive cybersecurity. Automobile manufacturers ignore these federal initiatives at their peril. If they do, today’s “best practices” will become regulatory or judicial mandates – mandates that are rigid and change on, say, a three-year cycle. Think how that will affect code development.

So what, in the end, does a good system of cybersecurity look like – not from a technical perspective, but from that of a policymaker in Washington contemplating the question from a broader national perspective? Here I offer a few process-oriented suggestions, drawn both from the NHTSA best-practices study and from I Am The Cavalry, a non-profit group of security professionals. To my mind, the automobile industry should have a voluntary set of standards and a self-assessment model for the cybersecurity of its products that asks, broadly speaking, the following questions:

  • Can you explain to policy makers and insurers how it is that you design and develop your software products?  Do you do adversarial testing programs for your products and for critical components of your supply chain? If not, why not?
  • Are you open to third-party research that finds flaws in your systems? Too often, developers resist outside scrutiny. When you receive a good-faith report of a problem, how do you respond to it?
  • What are the forensics of your systems? Do they provide tamper-evident, forensically sound logging and evidence capture to facilitate safety investigations?
  • Are your systems capable of being securely updated in a prompt and agile way? I once advised a client (not, I hasten to add, a car manufacturer) whose system was effectively unpatchable. My opinion was then, and remains today, that such a design is almost per se negligent.
  • Finally, how are your cyber systems incorporated into the physical vehicles you are building? Is there, for example, physical and logical isolation separating critical systems from non-critical ones? The Jeep Cherokee hack from last year was partially attributed to a unitary system of control – a design that, I think, will no longer be acceptable.
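The forensics question above has a concrete engineering counterpart. One common way to make logging tamper-evident is hash chaining: each record’s digest incorporates the previous record’s digest, so any after-the-fact alteration breaks the chain. The sketch below is purely illustrative – the class and field names are my own invention, not any manufacturer’s actual system – but it shows the idea an auditor or insurer might look for.

```python
import hashlib
import json

class TamperEvidentLog:
    """Illustrative hash-chained event log (not a real automotive API)."""

    GENESIS = "0" * 64  # fixed starting value for the chain

    def __init__(self):
        self.records = []
        self._last_hash = self.GENESIS

    def append(self, event: dict) -> None:
        # Each record's hash covers the previous hash plus the event payload,
        # chaining the records together.
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((self._last_hash + payload).encode()).hexdigest()
        self.records.append({"event": event, "prev": self._last_hash, "hash": digest})
        self._last_hash = digest

    def verify(self) -> bool:
        # Recompute the chain from the start; any edited record (or broken
        # link) makes the recomputed digest disagree with the stored one.
        prev = self.GENESIS
        for rec in self.records:
            payload = json.dumps(rec["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if rec["hash"] != expected or rec["prev"] != prev:
                return False
            prev = rec["hash"]
        return True
```

A log like this cannot prevent tampering, but it makes tampering detectable in a safety investigation: silently editing one brake-event record invalidates every digest that follows it. Production systems would add write-once storage or periodic anchoring of the chain head to an external witness.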

So. Where does all that leave us?

First, I see no prospect, in the long run, of avoiding liability for insecure code. The only question is whether it will be absolute liability or liability based on a reasonable-care/negligence standard. Because an absolute liability standard would severely stifle innovation, the IoT industry should advocate for the more reasonable negligence standard.

Second, liability is all about money. That means we will inevitably see the development of an insurance industry with requirements to evaluate or rate cybersecurity risk. One of the critical problems for the IoT industry is figuring out how to do that – because if it can’t, the natural tendency will be to impose some form of absolute liability.

Third, industry owes it to itself to get ahead of the curve. If companies don’t help design the liability system now, someone will design it for them, and I suspect they will like it a lot less than one they built themselves.

Finally, do not, under any circumstances, make the mistake of thinking that this is a technical problem. It is not. It is a social and economic issue of the highest order, both for the automotive industry and for every industry that manufactures life-critical IoT devices. From Washington’s perspective, this really is a national security problem. And if you make the category mistake of thinking otherwise … well … then you run the very real risk of being drafted into the fight.

Paul Rosenzweig is a senior advisor at The Chertoff Group and a former deputy assistant secretary for policy at the U.S. Department of Homeland Security. This blog is based on a speech given at ESCAR 2017 in Michigan in June 2017. My thanks to ESCAR for the invitation to speak.