* Every time (and I do mean every time) I get to look at the entries in an organization's risk register, I see a fundamental problem. Most of the entries reflect control deficiencies like "failure to patch in a timely manner." The problem is that these risk registers also require the user to provide a likelihood and impact rating for the issue, and users invariably rate the likelihood of that deficiency occurring paired with the impact of some event that might occur as a result. That's like saying every time the batteries in a smoke detector fail, the house will burn down. The result in most cases is grossly overstated risk ratings, which leads either to people ignoring the risk register because they intuitively sense it's inaccurate or, maybe worse, actually letting it guide their decisions. If you're going to use likelihood and impact ratings, it only makes sense to do so on scenarios that represent an actual loss event -- e.g., compromise of sensitive data via a malware attack.
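The smoke-detector point can be made concrete with some arithmetic. In this sketch, all of the probabilities and the impact figure are hypothetical numbers chosen purely for illustration; the point is only that rating the deficiency itself skips the conditional step between a control failing and a loss actually occurring.

```python
# Hypothetical figures for illustration only -- not from any real register.
p_deficiency = 0.9              # assumed chance patching lags in a given year
p_loss_given_deficiency = 0.05  # assumed chance a lag actually leads to a breach
impact = 1_000_000              # assumed loss magnitude in dollars

# Naive register entry: treats the deficiency itself as the loss event,
# i.e., every dead smoke-detector battery burns the house down.
naive_risk = p_deficiency * impact

# Scenario-based entry: the deficiency only matters if it is exploited,
# so the loss event's probability includes the conditional term.
scenario_risk = p_deficiency * p_loss_given_deficiency * impact

print(f"naive: ${naive_risk:,.0f}  scenario: ${scenario_risk:,.0f}")
```

With these assumed numbers the naive entry overstates expected loss by a factor of twenty, which is the kind of distortion that makes people either ignore the register or be misled by it.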
Let me add one more thing that might help put it into perspective. In order to manage an organization cost-effectively, decision-makers have to make well-informed decisions. In order to make well-informed decisions, they have to be able to compare the issues on their plates, including opportunities, operational costs, and risk issues of different flavors. In order to make these comparisons effectively they have to have meaningful measurements (apples to apples), and in order to have meaningful measurements you have to have an accurate model of the problem being measured, which tells you what to measure and how to use the measurements. Recognizing that no model is perfect, our industry has nonetheless operated from models so badly broken that managing risk cost-effectively is a complete crapshoot.
Hutton: Why do most risk management programs fail? My take:
1.) We think we understand risk. But, similar to Jack's thoughts, the reality is: what is risk? What creates it, and how is it measured? These things are, in and of themselves, evolving hypotheses. Our practitioners, industry groups like (ISC)² and ISACA, and standards bodies like NIST and the ISO all focus their efforts on telling you what to do, when in fact they shouldn't. Formalizing risk standards and models is counterproductive to innovation.
An analogy: What if 100 years ago the International Standards Organization for Physics (ISOP) settled on J.J. Thomson's Plum Pudding Model of atomic theory (in which electrons were thought to be embedded in a diffuse sphere of positive charge), and then decided not to apply the scientific method to disprove that model? Now, what if ISOP created a document that formalized the Plum Pudding Model, and industry and science then simply had to take it as "the way to do things"? And what if practitioners suffered negative incentives should they think of innovating beyond that model? That's exactly what our industry is doing to us. And the current geopolitical marketing around "cyber" isn't helping.