It's 2018 and the onslaught of information system compromises and data breaches does not appear to be leveling off. If anything, they seem to be getting worse in both frequency and magnitude. Most neutral websites tracking these statistics show a similar pattern.
Why is this the case? After all, you would think that in a failure-rich environment there would be plenty of feedback to inform the engineering process. Shouldn't information system security engineering be like civil or aviation engineering - where the number of catastrophic failures goes down over time?
It turns out that the general practice of "information system security engineering" bears little resemblance to other, more mature engineering disciplines. But don't just take my word for it. Here's Janet Oren on the topic (Oren 2013):
"Just as you would not build an airplane without a safety engineer or a bridge without a structural engineer, you should not build an information system destined to be connected to cyberspace without a security engineer. But this continues to occur for critical systems in many commercial and government industries."
"The concept of systems security engineering was relatively well understood in 1970 to mean independent verification of the acceptability of security controls. However, we find ourselves in 2013 with no generally accepted process for performing this function and integrating it with systems engineering and the system-development lifecycle even though that is what has been envisioned for over 40 years."
"Systems security engineers need to be experts in the field, they need a process to execute, and they need to be integrated with systems engineering."
The diagnosis of the problem is summed up by the last sentence above: you can't engineer something without the services of an engineer. But I would argue that having one won't guarantee success either, because the science behind security is also immature. Engineering processes can establish a certain degree of rigour and quality assurance, but they can't tell you which security mechanisms are appropriate against which threat loads, or in what combination. We have heuristics and "best practices" but very little in the way of guarantees.
Marcus Ranum summarized the state of computer security by identifying The Six Dumbest Ideas in Computer Security:
- Default Permit: Allowing anything by default and only denying it when it turns out to be bad.
- Enumerating Badness: The dumb idea behind a huge number of security products and systems (anti-virus, intrusion detection, application security, deep packet inspection firewalls).
- Penetrate and Patch: Make code without thinking about security. Rely on tools that didn't think about security. Attack the resulting product. Find holes. Write code to plug the holes. Rinse. Repeat.
- Hacking is cool: People who hack seem very impressive. And the results are often spectacularly embarrassing. Who wouldn't find that cool?
- Educating Users: Relying on users as an element of security is a poor choice.
- Action is Better than Inaction: Careful options analysis usually takes a back seat to the appearance of doing something.
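The first two ideas in the list are easiest to see next to their inverses, Default Deny and Enumerating Goodness. Here is a minimal sketch contrasting the two stances as a hypothetical inbound port filter; the port numbers and backdoor example are invented for illustration:

```python
# Hypothetical port filter contrasting the two stances. All port
# choices are illustrative, not a configuration recommendation.

BLOCKED_PORTS = {23, 135, 445}   # enumerating badness: list only the known-bad
ALLOWED_PORTS = {22, 443}        # enumerating goodness: list only the known-good

def default_permit(port: int) -> bool:
    """Default Permit: everything passes unless explicitly blocked."""
    return port not in BLOCKED_PORTS

def default_deny(port: int) -> bool:
    """Default Deny: nothing passes unless explicitly allowed."""
    return port in ALLOWED_PORTS

# A port nobody thought to enumerate (say, a new backdoor on 31337)
# sails through the default-permit filter but is stopped by default-deny.
print(default_permit(31337))  # True  (exposed)
print(default_deny(31337))    # False (blocked)
```

The asymmetry is the point: the blocklist must anticipate every bad thing, while the allowlist only has to describe the finite set of things the system is supposed to do.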
The second, third, and fourth items in his list summarize the behaviours that result from poor risk management and security engineering. Essentially, security is treated as an operational problem, applied only after the security nightmare has been built. Effort is consistently invested in detecting when holes open up to let adversaries in (enumerating badness). Effort is also invested in "hacking" systems towards a more secure posture. And, of course, hacking is cool because you get to give a presentation showing how ingenious you are.
The point Marcus makes is that money would be better spent understanding the limits of security and building systems securely from the outset. And, indeed, this point about understanding the limits is crucial to risk management. No architecture, design, or technology is bullet-proof (although some get close), and there is always a degree of exposure that must be quantified before the crown jewels are committed. However, security is usually stated as a binary property - "the system is secure" - rather than a limiting one - "the system is this secure". As a result, risk managers are continually surprised when they get hacked.
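One standard (if crude) way to put a number on "this secure" is annualized loss expectancy: ALE = single-loss expectancy × annualized rate of occurrence, where the single-loss expectancy is the asset's value times the fraction of it a single incident destroys. The figures below are invented purely for illustration:

```python
# Hypothetical annualized-loss-expectancy (ALE) calculation.
# All dollar amounts and rates are invented for illustration.

def annualized_loss_expectancy(asset_value: float,
                               exposure_factor: float,
                               annual_rate: float) -> float:
    """ALE = SLE * ARO, where SLE = asset_value * exposure_factor."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate

# e.g. a $2M customer database, 30% of its value lost per breach,
# and one breach expected every four years (ARO = 0.25):
ale = annualized_loss_expectancy(2_000_000, 0.30, 0.25)
print(f"${ale:,.0f} per year")  # $150,000 per year
```

Even a rough figure like this forces the conversation away from "is it secure?" and towards "is this residual exposure acceptable, and what would it cost to reduce it?"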
It is not going to be easy to unharness ourselves from these dumb ideas because whole industries have grown up around them. And, because the available feedback loops are not being enforced and analyzed, there is no way to determine how effective these industries are at achieving their claims. There is, however, plenty of evidence that they are consistently failing. In fact, more than one security vendor has had to rein in its claims after the companies it protects have been thoroughly owned.
Information systems are not bridges or buildings. As a society, we have come to accept that most software ships with failure built in. But there are a few places where society has reserved the right to insist that information system security be taken seriously from an engineering perspective - most notably the aviation and automotive industries, where failures can be disastrous. Everywhere else, we seem comfortably numb to the regular loss of our personal, medical and financial information. After all, while these losses can be annoying, they haven't killed anyone en masse yet.
Unfortunately, we are turning an important page in history. Cars are driving themselves. Planes are landing themselves. Robots and drones are being equipped with guns. Our homes contain devices that can be used to control our environment (and a whole lot more). Cities are being "connected" to cars and people. It seems as though "Maximum Overdrive" could be pulled out of the archives for a redo and actually have a credible technical basis for its plot.
So the onus is on information system projects to engage not only an IT systems engineer but also an information system security engineer, to ensure that security requirements are gathered, allocated, validated, and verified. This is the only way to achieve the ultimate risk management objective: executives who aren't surprised when their business gets hacked.