How much of it has to do with liability, though? In many cases a security breach won't lead to dire consequences for the employee, only for the IT department.
MH: That's exactly right, which gets to the economic factors. You could say some of it is budget, but it's also the economic principle of moral hazard, where a third party -- and in this case it could be an employee, an outsourced provider, or a number of different people -- is not the one responsible for dealing with the consequences. Even take malicious code, where somebody's system is compromised and used to attack somebody else's: It's not necessarily stealing from the host, it's owning the host to go launch an attack on somebody else. Well, what incentive do I have to protect somebody else?
It seems like a lot of users perceive the benefit of security measures as moot. If you have a keylogger on your system, for example, it doesn't matter how secure your password is. Or if an unpatched Windows PC can be infected within 12 minutes, what's the point of even basic desktop security? How do you deal with that?
MH: Those examples I don't necessarily agree with, but there are cases where you do have to ask, "Is the control actually reducing the risk?" And that's the challenge that I think security teams have to think about as well, thinking more broadly. What is the payback on the control in terms of real risk reduction? Or, which ones are what I would call the marquee controls? Yes, from an industry perspective there are these 15 controls you have to have, but which three will give you the most risk reduction, so you can concentrate on making sure they're deployed? And you can actually continue to operate, and operate effectively. A lot of the time, people will put these controls or policies in place and there's no oversight to make sure the control is operating the way it's supposed to.
So many of the people we surveyed in our annual security research study said they don't know if they have had a security breach and, if they did, they didn't know what happened. How can CIOs start turning that around?
MH: Sometimes the CIOs might not know. I think in many cases the security team knows. What I've seen in talking to my peers, and even at Intel if you go back seven years, is that there was knowledge in the security team about these things, but also this over-paranoia: "We can't say that this happened." Why? Sometimes it could be perceived that something happened and the security team didn't protect against it. Other times, there could be PR or legal reasons around what you share and how you share it. In some cases I think it becomes a question of how the security team should share the information, not whether they should. They've got to get past that communications barrier. And other times, you don't know what you don't know. As I said before, with the subtlety of intrusion attacks, it's hard to be aware of them. And you need good reporting information. You might have a policy to report when a PC or laptop is lost or stolen, but unless you build in that control, coupled with the employee's desire for a replacement, you don't create the forcing function to recognize all the lost devices.