
Your next digital security guard should be more like RoboCop

Mike Paquette, VP of Security Products, Prelert | June 5, 2015
Machine intelligence can be used to police networks and fill gaps where the available resources and capabilities of human intelligence are clearly falling short

Putting RoboCop on the case

Machine intelligence can be used to police massive networks and help fill gaps where the available resources and capabilities of human intelligence are clearly falling short. It's a bit like letting RoboCop police the streets, but in this case the main armament is statistical algorithms. More specifically, statistics can be used to identify abnormal and potentially malicious activity as it occurs.
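The statistical approach the article alludes to can be sketched simply: compare current activity against a learned baseline and flag large deviations. The following is a minimal, hypothetical illustration (the function name, data, and z-score threshold are assumptions, not any vendor's actual method):

```python
# Hypothetical illustration: flag activity whose rate deviates sharply
# from a historical baseline, using a simple z-score test.
import statistics

def flag_anomaly(history, current, threshold=3.0):
    """Return True if `current` lies more than `threshold` standard
    deviations from the mean of `history`."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > threshold

# Example: typical hourly login counts for one user account.
baseline = [42, 38, 45, 40, 44, 39, 41, 43]
print(flag_anomaly(baseline, 41))   # False: within normal range
print(flag_anomaly(baseline, 400))  # True: wildly abnormal
```

A production system would model many such signals at once and account for seasonality, but the core idea is the same: the threshold is derived from observed behavior, not hand-written by an analyst.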

According to Dave Shackleford, an analyst at SANS Institute and author of its 2014 Analytics and Intelligence Survey, "one of the biggest challenges security organizations face is lack of visibility into what's happening in the environment." The survey asked 350 IT professionals why they have difficulty identifying threats, and a top response was their inability to understand and baseline "normal behavior." That's something humans simply can't do in complex environments, and if we can't establish normal behavior, we can't spot abnormal behavior.

Instead of relying on humans looking at graphs on big screen monitors, or on human-defined rules and thresholds to raise flags, machines can learn what normal behavior looks like, adjusting in real time and becoming smarter as they process more information. What's more, machines possess the speed required to handle the massive amount of information that networks create, and they can do it in near-real time. Some networks generate terabytes of data every second, while humans can process no more than about 60 bits per second.
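"Adjusting in real time" means the baseline itself is updated with every new observation rather than recomputed from stored history. One standard way to do that is Welford's online algorithm, sketched below as a hypothetical example (the class name and sample values are my own, not from the article):

```python
# Hypothetical sketch of learning "normal" on a live stream: Welford's
# online algorithm maintains a running mean and variance without
# storing the raw events, so the baseline adjusts with each update.
import math

class OnlineBaseline:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        """Fold one new observation into the baseline."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x, threshold=3.0):
        """True if x is far outside the behavior seen so far."""
        if self.n < 2:
            return False  # not enough history to judge
        stdev = math.sqrt(self.m2 / (self.n - 1))
        if stdev == 0:
            return x != self.mean
        return abs(x - self.mean) / stdev > threshold

baseline = OnlineBaseline()
for value in [100, 104, 98, 101, 99, 103, 97, 102]:
    baseline.update(value)
print(baseline.is_anomalous(101))   # False: near the learned mean
print(baseline.is_anomalous(5000))  # True: far outside learned behavior
```

Because each update is constant-time and constant-memory, this style of computation keeps pace with high-volume streams where batch recomputation could not.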

Putting aside the need for speed and capacity, a larger issue with the traditional way of monitoring for security issues is that rules are dumb. That's not just name calling, either; they're literally dumb. Humans set rules that tell the machine how to act and what to do, so the machine's speed and processing capacity are irrelevant. While rule-based monitoring systems can be very complex, they're still built on a basic "if this, then do that" formula. Enabling machines to think for themselves and feed better data and insight to the humans who rely on them is what will really improve security.

It's almost absurd not to have a layer of security that thinks for itself. Imagine, in the physical world, someone crossing the border every day with a wheelbarrow full of dirt. The customs agents, being diligent at their jobs and following the rules, sift through that dirt day after day, never finding what they think they're looking for. Even though the same person repeatedly crosses the border with a wheelbarrow full of dirt, no one ever thinks to look at the wheelbarrow itself. If they had, they would have quickly learned he'd been stealing wheelbarrows the whole time!


