Apple testing Deep Fusion in iOS 13 developer beta for iPhone 11 and iPhone 11 Pro

Apple will roll out the new Deep Fusion feature to beta users via an iOS software update, presumably iOS 13.2. Deep Fusion combines multiple exposures at the pixel level to deliver a higher level of detail than standard HDR imaging.
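
To make the pixel-level idea concrete, here is a minimal Swift sketch of what a multi-exposure merge can look like. The `Pixel` and `Frame` types and the mid-gray weighting heuristic are invented for illustration; Apple has not published how Deep Fusion actually weights exposures.

```swift
import Foundation

// A toy illustration of pixel-level exposure fusion. The types and the
// weighting heuristic are invented for this sketch; this is not Apple's
// implementation.
struct Pixel {
    var r: Double, g: Double, b: Double
    var luminance: Double { 0.2126 * r + 0.7152 * g + 0.0722 * b }
}

typealias Frame = [Pixel]

/// Blends several exposures of the same scene, weighting each pixel by
/// how well exposed it is (values near mid-gray get the highest weight).
func fuseExposures(_ frames: [Frame]) -> Frame {
    guard let first = frames.first else { return [] }
    return (0..<first.count).map { i -> Pixel in
        var r = 0.0, g = 0.0, b = 0.0, totalWeight = 0.0
        for frame in frames {
            let p = frame[i]
            // Gaussian weight centered on mid-gray: favors pixels that are
            // neither blown out nor crushed into shadow.
            let w = exp(-pow(p.luminance - 0.5, 2) / 0.08)
            r += w * p.r; g += w * p.g; b += w * p.b
            totalWeight += w
        }
        return Pixel(r: r / totalWeight, g: g / totalWeight, b: b / totalWeight)
    }
}
```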

Deep Fusion is a fascinating technique that extends Apple's philosophy of photography as a computational process. As far back as the iPhone 7 Plus, Apple was already merging output from the wide-angle and telephoto lenses to produce the best possible result.

Apple says the overall result translates into better skin transitions, better detail in clothing, and better crispness at the edges of moving subjects.

The next developer beta will introduce Deep Fusion support on the iPhone 11, iPhone 11 Pro, and iPhone 11 Pro Max to improve photos taken with the wide-angle camera. Unfortunately, the ultra-wide-angle sensor is not supported for now. According to Apple, Deep Fusion requires the A13 chip and will not be available on earlier iPhones.

Report breakdown (a rough Swift sketch of the full flow follows the list):

  1. By the time you press the shutter button, the camera has already grabbed three frames at a fast shutter speed to freeze motion in the shot. When you press the shutter, it takes three additional shots, and then one longer exposure to capture detail.
  2. Those three regular shots and the long-exposure shot are merged into what Apple calls a “synthetic long”, a major difference from Smart HDR.
  3. Deep Fusion picks the short exposure image with the most detail and merges it with the synthetic long exposure — unlike Smart HDR, Deep Fusion only merges these two frames, not more. These two images are also processed for noise differently than Smart HDR, in a way that’s better for Deep Fusion.
  4. The images are run through four detail processing steps, pixel by pixel, each tailored to increasing amounts of detail — the sky and walls are in the lowest band, while skin, hair, fabrics, and so on are the highest level. This generates a series of weightings for how to blend the two images — taking detail from one and tone, color, and luminance from the other.
  5. The final image is generated.
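
Hedged against that description, the overall flow might be sketched in Swift as below, reusing the `Pixel`, `Frame`, and `fuseExposures` pieces from the earlier example. Every name here is hypothetical, and the real steps 3 and 4 run learned models on the A13's hardware; the four-band weighting in particular is reduced to a placeholder.

```swift
// A hypothetical end-to-end sketch of the reported Deep Fusion flow.
// All names are invented; this is not Apple's code.
struct CapturedBurst {
    let preShutterShorts: [Frame]   // fast frames grabbed before the tap (step 1)
    let postShutterShorts: [Frame]  // fast frames taken at the tap (step 1)
    let longExposure: Frame         // one longer exposure for detail (step 1)
}

/// Hypothetical sharpness metric: luminance variance as a crude proxy for
/// detail (a real system would measure local gradients instead).
func detailScore(_ frame: Frame) -> Double {
    let lums = frame.map(\.luminance)
    let mean = lums.reduce(0, +) / Double(lums.count)
    return lums.reduce(0) { $0 + pow($1 - mean, 2) } / Double(lums.count)
}

/// Placeholder for the per-pixel blend weight. The report describes four
/// detail bands (sky and walls lowest; skin, hair, and fabric highest);
/// a constant stands in for that learned, band-aware weighting here.
func detailWeight(_ pixel: Pixel) -> Double { 0.5 }

func deepFusion(_ burst: CapturedBurst) -> Frame {
    // Step 2: merge the regular shots and the long exposure into a
    // "synthetic long" reference frame.
    let syntheticLong = fuseExposures(burst.postShutterShorts + [burst.longExposure])

    // Step 3: pick the single short exposure with the most detail.
    let shorts = burst.preShutterShorts + burst.postShutterShorts
    let sharpest = shorts.max { detailScore($0) < detailScore($1) }!

    // Steps 4-5: blend per pixel, taking detail from the sharp short frame
    // and tone, color, and luminance from the synthetic long.
    return (0..<sharpest.count).map { i -> Pixel in
        let w = detailWeight(sharpest[i])
        let d = sharpest[i], t = syntheticLong[i]
        return Pixel(r: w * d.r + (1 - w) * t.r,
                     g: w * d.g + (1 - w) * t.g,
                     b: w * d.b + (1 - w) * t.b)
    }
}
```

The contrast with Smart HDR is visible in `deepFusion`: only two frames enter the final blend, with detail drawn from one and tone, color, and luminance from the other.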

There is currently no way to disable this feature, and processing takes about a second. If you shoot and then immediately open the preview, the image may take about half a second to update to the processed version; that is still fast enough that most users will hardly notice.
