How Apple is improving mobile app security

Marco Tabini | Sept. 3, 2013
Security, always one of iOS's strong selling points, has come under a lot of scrutiny lately. But Apple has a multitude of ways to prevent malicious code from ever reaching our devices.

Buried treasure
And this is where iOS's software-based defenses kick in. Each app that runs on an iPhone or iPad is allowed to read and write files only inside a virtual "sandbox" that the operating system creates for it. Any attempt to access data outside of the sandbox is rejected outright, thus effectively allowing apps to communicate with each other only through approved channels that Apple has put in place.

For all practical purposes, the sandbox prevents a malicious app that has managed to slip through the review process from siphoning data that belongs to another app (like, say, online banking software) without the user's knowledge. Because sandboxing is implemented at the lowest levels of the operating system, it is very hard for a hacker to circumvent its security model—unless the user is operating a jailbroken device.
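In practice, the restriction is easy to picture. Here is a rough sketch, in Swift, of an app happily reading and writing inside its own container and being turned away the moment it reaches for another app's files; the paths are purely illustrative.

```swift
import Foundation

// A rough sketch of what sandboxing looks like from inside an app.
// The file names and paths below are illustrative only.

// Reading and writing inside the app's own container works normally.
let documents = FileManager.default.urls(for: .documentDirectory,
                                          in: .userDomainMask)[0]
let allowed = documents.appendingPathComponent("notes.txt")
try? "hello".write(to: allowed, atomically: true, encoding: .utf8)

// Reaching outside the container (here, a path where another app's
// data lives) is rejected before any data is ever returned.
let forbidden = URL(fileURLWithPath: "/private/var/mobile/Library/SMS/sms.db")
do {
    _ = try Data(contentsOf: forbidden)
} catch {
    print("Blocked by the sandbox: \(error)")
}
```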

To make a hacker's life even harder, iOS clearly separates the areas of memory dedicated to executable code from those that are supposed to contain only data, making it impossible—in theory, anyway—for the latter to spill into the former. This prevents an app from downloading code from the Internet and executing it once it's on the user's device, which would let it bypass the review process altogether and potentially unleash all sorts of trouble.
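Here is a rough Swift sketch of what that separation means in practice: the snippet asks the kernel for a chunk of memory that is both writable and executable, which is exactly what downloaded code would need to run, and in a third-party iOS app the request is simply refused. (The hard-coded page size and minimal error handling are simplifications for the example.)

```swift
import Darwin

// A rough sketch of the code/data separation described above: memory that
// is writable and executable at the same time is off limits to third-party
// apps, so data pulled off the network can never be turned into code.
let length = 4096
let page = mmap(nil, length,
                PROT_READ | PROT_WRITE | PROT_EXEC,
                MAP_ANON | MAP_PRIVATE, -1, 0)

if page == UnsafeMutableRawPointer(bitPattern: -1) {   // MAP_FAILED
    print("Writable, executable memory refused (errno \(errno))")
} else {
    _ = munmap(page, length)
}
```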

Anatomy of a heist
Unfortunately, even all this technology is no match for the wits of a determined hacker. For one thing, while sandboxing prevents apps from accessing each other's data, it doesn't necessarily stop them from accessing information that, under the appropriate circumstances, would be available to third-party software, like the user's contacts or photo albums.

Instead, malicious access to these resources normally has to be caught by Apple's reviewers, who observe the app in action and examine its binary code—which means that an app that manages to evade Apple's analysis tools will potentially be able to access everything from your messages to those pictures you really wanted to keep private.
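To see what that means in practice, here is a rough Swift sketch using the modern Contacts framework (the AddressBook API of the era worked along the same lines): the contact list arrives through a perfectly approved channel, so the sandbox itself never objects, and the only gatekeepers left are the review process and the user's consent.

```swift
import Contacts

// A rough sketch of the kind of access the sandbox does not block.
let store = CNContactStore()
store.requestAccess(for: .contacts) { granted, _ in
    guard granted else { return }
    let keys = [CNContactGivenNameKey, CNContactPhoneNumbersKey] as [CNKeyDescriptor]
    let request = CNContactFetchRequest(keysToFetch: keys)
    try? store.enumerateContacts(with: request) { contact, _ in
        // A malicious app could just as easily ship this off to a server.
        print(contact.givenName, contact.phoneNumbers.count)
    }
}
```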

Due to the dynamic nature of iOS's underlying technologies, evading that analysis is not as hard as it may sound. Even a moderately skilled developer could write code that, for example, takes two seemingly unrelated words, encrypts them, and combines them to form the name of a private API. The final bit of code thus doesn't come into action until the app is run; it's a bit like trying to smuggle a gun aboard an aircraft by breaking it down into its individual parts.
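Here is a rough Swift sketch of the idea, with the encryption step left out for brevity; the selector name it assembles, "hiddenCapability", is a made-up placeholder rather than a real private API.

```swift
import Foundation

// A rough sketch of the smuggling trick described above: the name is
// assembled from innocuous fragments only while the app is running, so it
// never appears as a literal string in the shipped binary.
let fragments = ["hidden", "Capability"]
let selector = NSSelectorFromString(fragments.joined())

let target: NSObject = ProcessInfo.processInfo
if target.responds(to: selector) {
    _ = target.perform(selector)   // resolved only at run time
}
```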

However, naïve implementations of this technique still leave telltale signs that a sufficiently sophisticated static analyzer can detect—bullets viewed by an X-ray machine still look like bullets, after all. These attempts are almost always discovered and blocked by app reviewers well before they manage to make their way onto a user's device.
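As a toy illustration of that X-ray machine, a reviewer's tooling might do little more than scan a submitted binary for the machinery such tricks depend on; the file path and the watch list below are invented for the example, and the real analysis tools are considerably more thorough.

```swift
import Foundation

// A toy static check: even when selector names are assembled at run time,
// the machinery used to do it tends to leave visible traces in the binary.
func flagSuspiciousSymbols(in binaryPath: String) {
    guard let data = try? Data(contentsOf: URL(fileURLWithPath: binaryPath)) else { return }
    let text = String(decoding: data, as: UTF8.self)

    let watchList = ["NSSelectorFromString", "performSelector", "dlopen", "dlsym"]
    for symbol in watchList where text.contains(symbol) {
        print("Worth a closer look: \(symbol)")
    }
}

flagSuspiciousSymbols(in: "MyApp.app/MyApp")
```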

 
