Is The iPhone 7 Plus Apple’s First Step To Augmented Reality? Not Likely.

It was a reasonable question to ask. Apple purchased a 3D imaging company called PrimeSense in 2013, and now has released a phone with two camera lenses and the ability to perceive depth of field, the base requirement for capturing 3D spaces and objects. Add that to the fact that Apple CEO Tim Cook has made several statements expressing his interest in augmented reality.

So it’s natural to wonder if the iPhone 7 Plus represents Apple’s first steps toward some augmented reality product that could place 3D objects over a real-world space. Microsoft’s HoloLens headset already does just that, as does Lenovo’s PHAB 2 Pro, the first product to embody Google’s Project Tango AR technology.

The iPhone 7 Plus features a 28mm wide-angle lens and an additional 56mm lens (which technically is not the telephoto lens that Apple’s promo materials suggest, but rather something closer to a “portrait” lens).

The two cameras can work together to measure a limited amount of information about the depth of field of objects in the frame. But their usefulness for capturing AR imagery is limited for two main reasons: they can only get depth of field data on image elements that both lenses can see, and they sit too close together on the back of the device to get much range.

The Limits of Dual-Lens

The camera modules used in the iPhone 7 Plus, sources tell me, were almost certainly developed by LinX, an Israeli mobile camera module maker that Apple acquired in 2015. Before the acquisition, LinX claimed its smartphone camera modules could deliver DSLR-level photo quality while taking up so little space as to preserve the thin profile of the phone. These claims are very similar to the ones Apple makes about the iPhone 7 Plus’s camera.

LinX also boasted that its dual-lens technology could capture depth of field, so that the background of an image could be manipulated independently of the foreground, or even replaced by another background entirely. This, of course, describes the trick that Apple is most proud of in the iPhone 7 Plus camera, in which the foreground subject (a person) is put sharply in focus while the background is blurred. Apple calls this “Portrait” mode.

To create this effect, the two cameras work together to capture enough image data to reasonably conclude that the thing in the foreground is a person, perhaps by recognizing the edges of a head or by running a face-detection algorithm. Once the software has identified the foreground object as a person, it can bring that area of the image into sharp focus, treat everything else in the frame as background, and blur it.
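Apple hasn’t published how this segmentation actually works, but the general shape of the idea can be sketched. The snippet below is a minimal illustration, not Apple’s implementation: it uses OpenCV’s stock Haar-cascade face detector as a stand-in for person recognition, a padded rectangle in place of a real depth mask, and placeholder file names.

```python
# Minimal portrait-style sketch: find a face, keep that region sharp,
# and blur everything else. This stands in for depth-assisted segmentation;
# it is not Apple's actual pipeline.
import cv2
import numpy as np

image = cv2.imread("photo.jpg")                      # placeholder file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Build a rough foreground mask from the detected face rectangles,
# padded to cover hair and shoulders.
mask = np.zeros(image.shape[:2], dtype=np.uint8)
for (x, y, w, h) in faces:
    pad = h // 2
    cv2.rectangle(mask, (x - pad, y - pad), (x + w + pad, y + h + 2 * pad),
                  color=255, thickness=-1)

# Blur the whole frame, then composite the sharp foreground back on top.
blurred = cv2.GaussianBlur(image, (51, 51), 0)
result = np.where(cv2.merge([mask] * 3) == 255, image, blurred)
cv2.imwrite("portrait.jpg", result)
```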

But here’s the rub. To establish depth of field for a given area of an image, both lenses must be able to independently and uniquely identify points in that area. Only when the software can match those points across the two views can the camera build a 3D rendition of the things in the frame.

This, however, depends heavily on the visual variety of the objects and surfaces within the frame. If the camera is pointed at a flat white wall, for example, where there is little variety, the camera software will have trouble finding matches between the points captured by each lens, and not much depth information will be collected, as the sketch below illustrates.
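To make the matching problem concrete, here is a rough sketch of generic two-view depth estimation using OpenCV’s block matcher. This is a textbook stereo technique, not the iPhone’s pipeline; the image files, focal length, and baseline are placeholder values.

```python
# Generic stereo depth sketch: disparity (and therefore depth) only exists
# where the matcher can find the same distinctive point in both views.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # placeholder files
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# For each pixel in the left image, search along the same scanline of the
# right image for the best-matching block.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# On a flat, featureless wall the matcher finds few reliable correspondences,
# so large areas come back invalid (disparity <= 0).
valid = disparity > 0
print(f"pixels with usable depth: {valid.mean():.0%}")

# Depth is inversely proportional to disparity: Z = f * B / d, where f is the
# focal length in pixels and B is the baseline between the two lenses.
f_px, baseline_m = 1000.0, 0.01                          # placeholder values
depth_m = np.where(valid, f_px * baseline_m / np.maximum(disparity, 1e-6), 0.0)
```

The Z = f * B / d relationship is also why a wider baseline buys more usable range, which is where the camera-placement question below comes in.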

Capturing 3D images and mapping local environments for AR content requires hardware that can provide more refined and consistent depth of field data. The camera should detect and assign horizontal and vertical coordinates (x and y) and a depth value (z) to the surfaces and objects in its field of view. It must also be able to locate itself within a space, even as it moves around.
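As an illustration of the kind of output that implies, the sketch below back-projects a per-pixel depth map into (x, y, z) points using a standard pinhole camera model. The intrinsics and depth values are invented placeholders, and camera self-localization (the “locate itself within a space” part, usually handled by SLAM or visual-inertial tracking) is deliberately left out.

```python
# Hedged sketch: turn a dense depth map into 3D (x, y, z) points using
# pinhole-camera intrinsics. Real AR systems also track the camera's own
# pose over time, which is not shown here.
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Map each pixel (u, v) with depth Z to a camera-space point (X, Y, Z)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)              # shape (h, w, 3)

# Placeholder intrinsics and a fake depth map where everything is 2 m away.
depth = np.full((480, 640), 2.0)
points = backproject(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
```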

Camera Placement

The placement of the two cameras on the iPhone 7 Plus also raises doubts about its usefulness for AR. The rule of thumb says the distance between two identical lenses, times 10, equals the distance from the camera within which it’s possible to capture depth of field. So if the distance between the lenses is one inch, together they can define three-dimensional objects in a space of 10 inches or less in front of them.

The two lenses on the iPhone 7 Plus’s camera appear to be roughly a centimeter apart. If the lenses were identical, that would mean the camera could detect depth of field within 10 centimeters.

But the iPhone 7 Plus’s lenses are not identical—one is wide angle and the other “portrait”—so the “distance times 10” equation may not neatly apply. An AR developer told me that with some heavy software calibration, the iPhone 7 Plus camera may be capable of detecting depth up to 50 times the distance between the lenses. Even then, 50 centimeters in front of the camera isn’t much range.
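Spelled out as arithmetic, using only the multipliers quoted above (rules of thumb from the developer, not Apple specifications):

```python
# Rule-of-thumb range estimate: usable depth range ~ lens baseline * multiplier.
def usable_depth_range_cm(baseline_cm, multiplier=10):
    return baseline_cm * multiplier

print(usable_depth_range_cm(1.0))       # identical lenses ~1 cm apart -> ~10 cm
print(usable_depth_range_cm(1.0, 50))   # best case with heavy calibration -> ~50 cm
```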

If the point of the two cameras had been to capture imagery for AR apps, Apple would have placed them farther apart in the design.

No PrimeSense Sensor

Devices equipped for AR normally use an active sensor to measure depth of field (Microsoft’s HoloLens, for example). These sensors send out a beam of light and measure the time it takes for the light to bounce off objects and return. That interval reveals how far away the objects in the camera’s field of view are.
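For the time-of-flight flavor of active sensing described above, the distance calculation is simple enough to show directly; the numbers below are illustrative only.

```python
# Time-of-flight back-of-the-envelope: distance is half the round trip of the
# emitted light at the speed of light. Values here are purely illustrative.
SPEED_OF_LIGHT_M_S = 299_792_458.0

def distance_from_round_trip(seconds):
    return SPEED_OF_LIGHT_M_S * seconds / 2.0

# A pulse that returns after roughly 6.7 nanoseconds came from about 1 m away.
print(distance_from_round_trip(6.7e-9))   # ~1.0 (meters)
```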

Active sensor technology is exactly what Apple bought in its PrimeSense acquisition (“structured light” sensors, to be exact, which project a known infrared pattern and read depth from how that pattern deforms across surfaces rather than timing the light’s return). But no PrimeSense sensor was used in the iPhone 7 Plus. The phone uses the same class of passive sensor found in most traditional cameras, and passive sensors, which simply pull in light from the environment, capture less depth of field information, and capture it less reliably.

SciFutures CTO Scott Susskind believes the fact that Apple didn’t use the PrimeSense technology in the iPhone 7 Plus indicates that Apple had no AR ambitions for the phone. “PrimeSense and their depth-sensing technology . . . is a far superior solution for AR tracking than the dual cams in the 7 Plus,” Susskind said in an email to Fast Company.

Susskind said it’s possible Apple applied PrimeSense’s computer vision algorithms to the iPhone 7 camera, but added that simply using the PrimeSense hardware sensor instead would have produced far better 3D-tracking capabilities.

What Tim Said

And we’re talking about the AR experience on a phone. Despite the success of Pokémon Go, I doubt that people in Jony Ive’s Industrial Design Group are very excited about a phone-based augmented reality experience. It just doesn’t line up with what we’ve seen of their sensibilities. It’s a clunky experience in which the phone creates a barrier between the user and the world around them, including other people.

If you listen closely, Tim Cook’s comments about AR all but confirm this. Cook told Good Morning America: “[AR] gives the capability for both of us to sit and be very present, talking to each other, but also have other things—visually—for both of us to see. Maybe it’s something we’re talking about, maybe it’s someone else here who’s not here present but who can be made to appear to be present.”

Cook’s comments seem to point toward a preference at Apple for pursuing augmented reality products rather than an Oculus-style VR experience in which the user loses visual contact with the real world entirely. But his comments also suggest that the type of AR Apple’s interested in is not one where the user focuses their attention on the screen of a smartphone or tablet. Jay Wright, president and general manager of PTC’s Vuforia, told me he thought Cook’s comments suggest an experience you might get from some form of AR headset or glasses.

I don’t doubt that Apple is very interested in some form of augmented reality, and I’m confident that there’s a workbench in some undisclosed location (in some nondescript office park in the Valley) that’s loaded with prototype AR devices. But I doubt those devices are tablets or phones.

 
