Reprinted from TidBITS by permission; reuse governed by Creative Commons license BY-NC-ND 3.0. TidBITS has offered years of thoughtful commentary on Apple and Internet topics. For free email subscriptions and access to the entire TidBITS archive, visit http://www.tidbits.com/

Behind the iPhone 7 Plus's Portrait Mode

Glenn Fleishman

The soft-focus Portrait mode that Apple promised for the iPhone 7 Plus has arrived with the developer release of iOS 10.1; a public beta will follow soon. This new mode uses both lenses in the iPhone 7 Plus to identify objects, calculate layers of depth, and then silhouette the closest layer while rendering the rest out of focus. (For more on the iPhone 7 camera changes, see "[1]iPhone 7 and 7 Plus Say 'Hit the Road, Jack'," 7 September 2016.)

Portrait mode appears alongside modes like Pano and Slo-Mo in the Camera app, but during the beta, the first time you select it, an explanation appears with a Try the Beta link to tap. After that, it's just another option. (TidBITS typically doesn't report on developer betas, but Apple allowed some publications to test and write up the Portrait feature, making it fair game.)

This soft-focus portrait approach is often called "bokeh" (pronounced "boh-keh"), a borrowed Japanese word describing the effect of a close, shallow depth of field (the portion of an image in focus) in which everything in front of and behind the primary object is very blurry. This approach mimics how our eyes process a person or object seen up close, and it adds a kind of visual snap that can be beautiful or gimmicky, depending on the composition. (If you don't own an iPhone 7 Plus, or if you do and can't wait for Portrait mode, existing apps can simulate bokeh; see "[2]FunBITS: How Out-of-Focus Photos Can Be Works of Art," 28 February 2014.) Bokeh typically requires an expensive telephoto lens paired with a mirrorless or DSLR camera.
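Why big lenses matter here can be sketched with the standard thin-lens depth-of-field approximation. The helper below and all of its numbers (focal lengths, f-numbers, circle-of-confusion values) are illustrative assumptions for this sketch, not Apple's specifications.

```python
# Sketch of the standard depth-of-field approximation, DoF ~ 2*N*c*s^2 / f^2
# (valid when the subject distance s is much larger than the focal length f).
# All example numbers below are illustrative, not Apple's actual figures.

def depth_of_field_mm(focal_mm, f_number, subject_mm, coc_mm):
    """Total depth of field in mm: 2 * N * c * s^2 / f^2."""
    return 2 * f_number * coc_mm * subject_mm**2 / focal_mm**2

# A full-frame 50mm lens at f/1.8 focused at 2 m (c ~ 0.03 mm):
print(depth_of_field_mm(50, 1.8, 2000, 0.03))    # ~173 mm: genuinely shallow

# The iPhone 7 Plus telephoto (6.6mm, f/2.8) at 1 m, with a much smaller
# assumed circle of confusion (~0.003 mm) for its tiny sensor:
print(depth_of_field_mm(6.6, 2.8, 1000, 0.003))  # ~386 mm: far deeper focus
```

Those rough numbers illustrate the point: at portrait distances a tiny lens keeps a much deeper slice of the scene in focus, so genuine optical bokeh is out of reach and has to be computed instead.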
Apple's simulation tries to bring that palpable sense of an expensively captured image to its tiny-lensed camera system, which includes 4mm wide-angle (29mm equivalent at 35mm) and 6.6mm "telephoto" (58mm equivalent) lenses. (Apple is technically accurate in calling the 6.6mm lens a telephoto, but most photographers consider only lenses 70mm or longer to be telephoto.)

Apple's new Portrait mode isn't bad, even in beta. In my brief testing, and in looking at photos others have posted, it works better with people and animals than objects. That makes sense, because objects inside the images have to be recognized, and Apple clearly optimized the mode for what it put on the label: portraits.

[3][tn_Portrait-mode-woman.jpg] [4][tn_Portrait-mode-objects.jpg]

The Camera app provides useful cues when you're setting up the shot. If you're too close (within a foot) or too far away (more than eight feet), an onscreen label tells you to move. It also warns you if there's not enough light for the shot, as the telephoto lens is just ƒ/2.8, a relatively small aperture for a lens that tiny, while the wide-angle lens is ƒ/1.8. Like HDR, Portrait mode saves both the unaltered image and the computed one into the Photos app, so you don't lose a shot if the soft-focus effect fails.

The math behind Portrait mode is cool. You don't need two identical lenses to calculate depth; you just need lenses whose precise characteristics the software knows, such as their angle relative to each other and each lens's distortions. The depth-finding software compares photos from each lens and identifies common features across the image using Apple's increasingly deep machine-learning capabilities. It uses those common features to compare points between the two images, adjusted for what it knows about the cameras, enabling it to approximate distance.
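That point-matching step reduces to classic stereo triangulation: a feature that shifts more between the two images is closer to the camera. Here's a minimal sketch of the relation; the focal length and lens-spacing values are hypothetical, not the iPhone's calibration data.

```python
# Minimal stereo-depth sketch: depth = focal_length * baseline / disparity.
# The numbers are hypothetical; a real system also corrects for lens
# distortion and the cameras' relative angle, as described above.

def depth_from_disparity_m(focal_px, baseline_m, disparity_px):
    """Distance to a feature from its pixel shift between the two images."""
    if disparity_px <= 0:
        raise ValueError("feature must shift between the two views")
    return focal_px * baseline_m / disparity_px

# With a hypothetical 1000-pixel focal length and 10 mm lens spacing,
# a feature that shifts 40 pixels between the images is 0.25 m away:
print(depth_from_disparity_m(1000, 0.01, 40))   # 0.25
# The same feature shifting only 4 pixels would be 2.5 m away:
print(depth_from_disparity_m(1000, 0.01, 4))    # 2.5
```

Note how depth varies with the inverse of disparity: a one-pixel matching error means little up close but a large depth error far away, which is one reason exact distances can't be trusted.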
Because the depth measurements aren't exact, the stereoscopic software divides features into a series of planes, rather than placing each feature at a precise distance. These planes contain the outlined edges of every element in the scene. (For more, consult [5]this very readable technical summary of the approach from 2012.)

A two-camera bokeh feature has appeared previously in Android smartphones, so this isn't brand-new technology. But images from those earlier two-camera smartphones show much more variability than what I see in my testing and in early published examples. More advanced machine object recognition, combined with the iPhone 7 Plus's image signal processor and super-fast A10 processor, enables the phone to preview the effect accurately and then capture it instantly.

Even outside of Portrait mode, the iPhone 7 Plus already performs some tricks in combining shots between its wide-angle and so-called telephoto lenses to produce a single image. This "fused" image is synthetic, but not artificial: it doesn't introduce detail, but it combines aspects of each separately, simultaneously captured image. I've found that the iPhone uses this approach primarily in good lighting conditions, where the telephoto lens captures the scene and the wide-angle lens adds luminance information. The larger aperture of the wide-angle lens lets it capture detail better in darker areas and reduce the speckling caused by the telephoto lens's image sensor not receiving enough light. (This is the same reason you're warned about insufficient light when composing a Portrait photo.)

These combinations and Portrait mode all fall into an evolving field known as computational photography. High-dynamic-range (HDR) images are the best-known example, combining multiple successive shots at different exposures into one image with a sometimes supernatural-looking tonal range.
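One way to picture the plane-based approach described above: bin the rough depth estimates into a handful of layers, keep the nearest layer sharp, and blur each farther layer more heavily. This is a toy sketch of that idea under assumed values, not Apple's actual pipeline.

```python
# Toy sketch of depth planes: bin rough per-feature depth estimates into
# discrete layers, then pick a blur strength per layer. Illustrative only;
# the layer count and blur step are assumptions, not Apple's values.

def assign_planes(depths_m, num_planes):
    """Map each rough depth estimate to a plane index, 0 = nearest."""
    lo, hi = min(depths_m), max(depths_m)
    width = (hi - lo) / num_planes or 1.0  # avoid /0 if all depths match
    return [min(int((d - lo) / width), num_planes - 1) for d in depths_m]

def blur_radius_px(plane, px_per_plane=3.0):
    """Nearest plane (index 0) stays sharp; farther planes blur more."""
    return plane * px_per_plane

depths = [0.5, 0.6, 1.8, 2.0, 3.5, 3.9]    # rough estimates in meters
planes = assign_planes(depths, 3)
print(planes)                               # [0, 0, 1, 1, 2, 2]
print([blur_radius_px(p) for p in planes])  # [0.0, 0.0, 3.0, 3.0, 6.0, 6.0]
```

Grouping into planes means a small error in any one depth estimate rarely moves a feature out of its layer, which is what makes the imprecise measurements usable.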
The [6]Light L16 camera is an extreme example of what's possible: when it ships, it will have sixteen lenses across three focal lengths to create huge, high-quality, high-resolution images. You may also have heard of [7]Lytro, which made consumer and pro cameras whose image sensors calculated the angle of incoming light rays, allowing an image to be refocused after the picture was taken. That approach was a little too limited and wacky, and the company discontinued its cameras.

For the moment, Apple isn't making this depth-finding output available to third-party developers, who also can't access both cameras at once as separate streams to do their own processing. Third-party apps can grab the image sensor data as a raw Digital Negative format file, and several have already been updated for this. But raw files can be captured from only one lens; if a developer wants to use both, Apple provides a fused JPEG. (Raw image support is available on the iPhone 6s, 6s Plus, SE, 7, and 7 Plus, and the 9.7-inch iPad Pro.)

Apple's Portrait mode is just the first of the computational methods I expect we'll see with the iPhone 7 Plus, since there is so much more you can do with the ability to compute depth in real time, from using the iPhone as an input for motion capture or gaming control to 3D scanning of objects. And, if developers are lucky, Apple will open up some dual-stream or dual-image capture options that could result in a blossoming of even more new ideas.

References

1. http://tidbits.com/article/16738
2. http://tidbits.com/article/14555
3. http://tidbits.com/resources/2016-09/Portrait-mode-woman.jpg
4. http://tidbits.com/resources/2016-09/Portrait-mode-objects.jpg
5. https://en.ids-imaging.com/whitepaper.html?file=tl_files/downloads/whitepaper/IDS_Whitepaper_3D_Stereo_Vision.pdf
6. https://www.light.co/
7. https://www.lytro.com/