
Apple’s Core ML: The pros and cons

Matt Asay | June 19, 2017
Apple’s impressive iOS machine learning technology is a trade-off between its limits and its ease of adoption for developers.

That lack of federated learning may be particularly thorny for the Apple-verse, especially because Google has advanced federated learning so far. As Google research scientists Brendan McMahan and Daniel Ramage write:

Federated learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud.

Here’s how it works, they write:

Your device downloads the current model, improves it by learning from data on your phone, and then summarizes the changes as a small focused update. Only this update to the model is sent to the cloud, using encrypted communication, where it is immediately averaged with other user updates to improve the shared model. All the training data remains on your device, and no individual updates are stored in the cloud.

In other words, instead of harnessing an army of servers in the cloud, you can harness an army of mobile devices in the field, which offers far greater scale and keeps the training data where it already lives. Equally (or more) important, the improved model is immediately available on the device, making the user experience personalized without waiting for a tweaked model to round-trip through the cloud. As developer Matt Newton has highlighted, “It could be a killer feature to have easy APIs for doing personalization all on devices.”
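To make the mechanics concrete, here is a minimal sketch of the server-side averaging step the researchers describe, assuming the model is simply a flat array of weights. The names here (ClientUpdate, federatedAverage) are illustrative, not part of any Google or Apple API.

```swift
import Foundation

// Each device sends back only a small model update (weight deltas) plus the
// number of examples it trained on; in the real protocol this travels over
// encrypted communication, and the raw training data never leaves the phone.
struct ClientUpdate {
    let weightDeltas: [Double]
    let exampleCount: Int
}

// The server combines the updates with a data-weighted average and applies the
// result to the shared model, so no individual update has to be stored.
func federatedAverage(model: [Double], updates: [ClientUpdate]) -> [Double] {
    let totalExamples = updates.reduce(0) { $0 + $1.exampleCount }
    guard totalExamples > 0 else { return model }

    var newModel = model
    for update in updates {
        let weight = Double(update.exampleCount) / Double(totalExamples)
        for i in newModel.indices {
            newModel[i] += weight * update.weightDeltas[i]
        }
    }
    return newModel
}

// Two devices contribute updates; the shared model moves toward the
// data-weighted consensus without the server ever seeing training data.
let shared = [0.5, -0.2, 1.0]
let updated = federatedAverage(model: shared, updates: [
    ClientUpdate(weightDeltas: [0.1, 0.0, -0.2], exampleCount: 300),
    ClientUpdate(weightDeltas: [-0.1, 0.2, 0.0], exampleCount: 100)
])
print(updated)  // approximately [0.55, -0.15, 0.85]
```

The point of the sketch is the shape of the exchange: only small weight deltas travel to the server, and they matter only in aggregate.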

Sure, federated learning isn’t perfect, as McMahan and Ramage acknowledge:

Applying federated learning requires machine learning practitioners to adopt new tools and a new way of thinking: model development, training, and evaluation with no direct access to or labeling of raw data, with communication cost as a limiting factor.

Even so, the upside outweighs the downside, giving researchers compelling reasons to confront the challenges.
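The communication cost is a real constraint: every weight in an update is bytes that have to cross a mobile connection. As a rough illustration only, here is one generic mitigation, quantizing each update to 8 bits before upload; this is not the specific compression Google’s protocol uses.

```swift
import Foundation

// Sketch of shrinking an update before upload by quantizing each weight delta
// to 8 bits. A generic illustration of reducing communication cost, not the
// compression scheme used in Google's federated learning protocol.
func quantize(_ deltas: [Double]) -> (scale: Double, payload: [Int8]) {
    guard let maxMagnitude = deltas.map({ abs($0) }).max(), maxMagnitude > 0 else {
        return (0, [Int8](repeating: 0, count: deltas.count))
    }
    let scale = maxMagnitude / 127.0
    // One byte per weight on the wire instead of eight.
    return (scale, deltas.map { Int8(($0 / scale).rounded()) })
}

// The server reverses the scaling before averaging the update into the model.
func dequantize(scale: Double, payload: [Int8]) -> [Double] {
    return payload.map { Double($0) * scale }
}
```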

 

With Core ML, has Apple underdelivered again?

You could look at this as yet another example of Apple falling behind its peers. From iCloud to Apple Maps and even Siri, Apple has either been late or underpowered relative to cloud and AI heavyweights like Google. With Core ML, I’m not so sure. The “Apple got it wrong” contention feels misplaced or, at best, premature.

For example, when Amazon Web Services released its own developer-facing machine learning services like Rekognition, Polly, and Lex, there were similar complaints that they were too basic or limited. But as Swaminathan Sivasubramanian, general manager for AWS, said of these services, the goal “is to bring machine learning to every AWS developer,” not to overwhelm them with the inherent complexity of machine learning.

In a similar manner, Apple is paving an easy path to getting started with machine learning. It’s not perfect, and it won’t go far enough for some developers. But it’s a good way to raise a generation of developers on the potential of machine learning.
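To give a sense of how low Apple has set that bar, here is roughly what on-device image classification looks like with Core ML and the Vision framework in iOS 11. It assumes a compiled model bundled with the app (Apple’s MobileNet sample model in this sketch, via its Xcode-generated MobileNet class); beyond that, it uses only standard Core ML and Vision calls.

```swift
import UIKit
import CoreML
import Vision

// Classify an image with a bundled Core ML model via the Vision framework.
// MobileNet here stands in for any compiled .mlmodel added to the project;
// Xcode generates a Swift class for it that exposes the underlying MLModel.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: MobileNet().model) else {
        return
    }

    // Vision wraps the Core ML model and handles scaling and cropping the input.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let observations = request.results as? [VNClassificationObservation],
              let best = observations.first else { return }
        print("\(best.identifier): confidence \(best.confidence)")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

That is the whole inference path: no servers and no training pipeline, just a bundled model and a dozen lines of Swift, which is exactly the trade-off this article has been describing.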

 
