Oh, the things that we didn't know enough to worry about five years ago: Cars that can be remotely "bricked" via computer; home automation controllers that can be compromised to distribute spam; "smart" light bulbs that can be hacked to reveal Wi-Fi passwords.
Now that the Internet of Things (IoT) is upon us, these things have all made the headlines lately. But I'm not here to say that the IoT is a terrible idea and we all need to reject it before it destroys our last vestiges of security and privacy. My take is that what is going on with the IoT is predictable, preventable and fixable. I say this because when it comes to rolling out new technologies, history has a way of repeating itself.
Think about it: Every time a new and cool technology is released, the early adopters find out within a month or two that it has some gaping security hole.
Why? Is this pattern really inevitable? My colleague Gary McGraw wrote in 2006 about "The Trinity of Trouble" (connectivity, extensibility and complexity) that underlies the introduction of security holes in new technology products. Those factors certainly contribute to the problem, but I see things a bit differently. I think of the root causes as naïveté, ignorance and laziness, or "Nail" for short. Here's why.
When diving into new technologies like connected refrigerators and thermostats, product developers tend to be naive about threats, ignorant of security controls and/or too lazy to learn and implement things the right way the first time.
Naïveté. Product developers tend to naively underestimate threats, particularly if they are new to them. They don't appreciate the lengths to which adversaries will go in researching their new products for possible vulnerabilities and developing tailor-made attacks against those vulnerabilities. Inevitably, they are surprised when vulnerabilities and attacks are disclosed. Caught off guard, they often rush a solution out the door that solves nothing and possibly even makes things worse.
Ignorance. Being naive about the threats their products will face, developers are naturally ignorant about the security controls that they can and should be implementing in their products. Things like end-to-end encryption, strong mutual authentication, threat modeling and code reviews are often simply ignored until it's too late.
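To make one of those ignored controls concrete, here is a minimal sketch of message authentication between an IoT hub and a device using a shared-secret HMAC, so that a forged or tampered command is rejected rather than blindly executed. The names (DeviceChannel, sign, verify) are purely illustrative and not drawn from any real product.

```python
# Hypothetical sketch: authenticating commands between an IoT hub and a
# device with a shared-secret HMAC. All names here are illustrative.
import hashlib
import hmac
import secrets

TAG_LEN = 32  # HMAC-SHA256 produces a 32-byte tag


class DeviceChannel:
    def __init__(self, shared_key: bytes):
        # In practice the key would be provisioned securely out of band.
        self.key = shared_key

    def sign(self, command: bytes) -> bytes:
        # Append an HMAC-SHA256 tag so the receiver can detect tampering.
        tag = hmac.new(self.key, command, hashlib.sha256).digest()
        return command + tag

    def verify(self, message: bytes):
        # Split off the tag, recompute it, and compare in constant time
        # (compare_digest avoids timing side channels). Returns the
        # command only if the tag is authentic, otherwise None.
        command, tag = message[:-TAG_LEN], message[-TAG_LEN:]
        expected = hmac.new(self.key, command, hashlib.sha256).digest()
        return command if hmac.compare_digest(tag, expected) else None


key = secrets.token_bytes(32)
hub, bulb = DeviceChannel(key), DeviceChannel(key)

msg = hub.sign(b"set_brightness:80")
print(bulb.verify(msg))                          # prints b'set_brightness:80'
tampered = msg[:-1] + bytes([msg[-1] ^ 1])       # flip one bit of the tag
print(bulb.verify(tampered))                     # prints None
```

This is deliberately the simplest possible version of the idea: even a few lines of standard-library code raise the bar considerably over the unauthenticated, plaintext protocols that many early IoT products shipped with. A real design would also need replay protection and encryption of the command itself.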
Laziness. All right, I know this sounds harsh, but when developers are aware of security controls and still don't implement them, my perspective from the outside is that they're just too lazy to do things right the first time.
The three elements of Nail are all quite understandable, though I can't quite forgive them. They are understandable because product development is fiercely competitive, with companies under intense pressure to be first to market with their new technologies. The thinking seems to be that once the new product has a strong foothold in the market, they will be able to go back and bolster security. The unforgivable part is that tomorrow never comes, or comes only when some researcher publishes a paper exposing a gaping security hole for all the world to see.