The Android-based HTC One M8, introduced in March 2014, had two cameras and simulated bokeh, with mixed results depending on the scene. Google's and Apple's object recognition has improved dramatically in the intervening period. A pioneer in this field, Marc Levoy, released a proof-of-concept iOS app years ago, SynthCam, that captured video frames to produce this effect; the effect is far simpler to achieve with multiple cameras in a single device. Levoy retired from a professorship at Stanford to go to work for Google.
This should be just the beginning, regardless of what Apple chooses to implement in the Camera app. With two cameras of different focal lengths, an iPhone can venture into Lytro territory, allowing for multiple focus points that can be selected after shooting. It also opens the potential for a flood of apps that make interesting use of the two cameras, going far beyond Apple's built-in Camera app, given that Apple is also offering third-party developers access to RAW image data and the cameras' wide-color data.
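To make the Lytro comparison concrete, here is a minimal sketch of how selecting a focus point after shooting could work, assuming the device can derive a per-pixel depth map from its two cameras. The function name, the 1-D "image," and the depth values are all hypothetical illustrations, not Apple's actual pipeline: pixels near the chosen focus depth stay sharp, and the rest get a blur whose radius grows with their distance from the focus plane.

```python
# Hypothetical sketch of depth-based refocus: keep pixels near the chosen
# focus depth sharp, box-blur the rest in proportion to depth distance.

def refocus(pixels, depths, focus_depth, max_radius=2):
    """Blur each pixel by an amount proportional to its distance
    from the chosen focus plane (a simple 1-D box blur)."""
    out = []
    n = len(pixels)
    for i in range(n):
        # Blur radius scales with |depth - focus_depth|, capped at max_radius.
        radius = min(max_radius, round(abs(depths[i] - focus_depth)))
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = pixels[lo:hi]
        out.append(sum(window) / len(window))
    return out

# A 1-D "image": a bright foreground band at depth 1.0 against a
# background at depth 5.0 (illustrative values only).
pixels = [10, 10, 200, 200, 200, 10, 10]
depths = [5.0, 5.0, 1.0, 1.0, 1.0, 5.0, 5.0]

sharp_fg = refocus(pixels, depths, focus_depth=1.0)    # foreground stays crisp
blurred_fg = refocus(pixels, depths, focus_depth=5.0)  # foreground smears
```

Because the blur is applied after capture from stored depth data, the same shot can be re-rendered with any focus plane the user taps.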
Side-by-side cameras make stereoscopic photos and videos possible. While these are called 3D, they're created much as our binocular vision works: two slightly separated images let us reconstruct a receding set of objects and distances. The same approach allows true 3D scanning of a static object: move a camera around it until an app recognizes it has enough imagery to stitch together a model.
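The binocular reconstruction described above comes down to triangulation: a feature that appears shifted by d pixels between the two views (the disparity) sits at depth Z = f · B / d, where f is the focal length in pixels and B is the baseline between the cameras. A minimal sketch, with illustrative numbers rather than real iPhone specifications:

```python
# Hedged sketch of stereo triangulation. Nearer objects produce larger
# disparities between the two views; depth falls off as 1/disparity.

def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Triangulate depth (mm) from the pixel disparity of a matched feature."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# Example values (not real hardware specs): focal length 1000 px,
# cameras 10 mm apart, feature shifted 20 px between the two images.
z_mm = depth_from_disparity(focal_px=1000, baseline_mm=10, disparity_px=20)
```

Running this per matched feature over the whole frame yields the depth map that both the 3D-scanning and the refocusing applications depend on.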
It's also possible to compute motion details from two cameras shooting simultaneously and remove blur. That's right: a photo seemingly ruined by someone turning their head could be snapped back into sharp focus.
Multiple cameras also make it easier to create automatically stitched 2D panoramas. Apple's current Camera panorama mode is fussy, requiring a stable hand and careful motion. Two cameras make it easier to capture enough information that a photographer can be messier.