iOS 13.2 public beta arrives with iPhone 11 Deep Fusion camera feature


Apple isn't done impressing everyone with the iPhone 11 cameras, and its next act is one step closer to reaching the general public: Deep Fusion photography.

Deep Fusion, now available through the iOS 13.2 public beta, is Apple's fancy name for machine-learning computational photography – it relies on AI to sort through several frames taken during, and even before, the shutter button is pressed, and to combine them into the best possible composite image.

The new camera feature is available in both the iOS 13.2 developer beta and now the iOS 13.2 public beta – if you've upgraded to the iPhone 11, iPhone 11 Pro or iPhone 11 Pro Max. It's just in time for Apple to hedge its bets ahead of the October 15 unveiling of the camera-focused Google Pixel 4.

How does iPhone 11 Deep Fusion work?

Deep Fusion is similar to shooting high dynamic range (HDR) photos: it takes a series of shots at different exposures and then combines them all into one image. But where HDR blends highlights and shadows, Deep Fusion's image processing is focused on texture, detail, and noise (the photographic kind).
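
To make the bracketing comparison concrete, here's a rough Swift sketch of how an app can request its own exposure bracket through AVFoundation's bracketed-capture API. Deep Fusion itself runs automatically inside Apple's camera pipeline and isn't something developers invoke, so treat this purely as an illustration of exposure bracketing in general; the photoOutput and delegate are assumed to come from an already-configured capture session.

```swift
import AVFoundation

// Illustrative only: a classic three-shot exposure bracket at -2, 0,
// and +2 EV, the kind of capture HDR pipelines build on. Deep Fusion
// is NOT triggered through this API.
func captureExposureBracket(with photoOutput: AVCapturePhotoOutput,
                            delegate: AVCapturePhotoCaptureDelegate) {
    let biases: [Float] = [-2.0, 0.0, 2.0]
    let bracketed = biases.map {
        AVCaptureAutoExposureBracketedStillImageSettings
            .autoExposureSettings(exposureTargetBias: $0)
    }
    let settings = AVCapturePhotoBracketSettings(
        rawPixelFormatType: 0,  // 0 = processed frames only, no RAW
        processedFormat: [AVVideoCodecKey: AVVideoCodecType.hevc],
        bracketedSettings: bracketed)
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}
```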

Deep Fusion will take nine separate exposures using two of the iPhone 11's cameras together, and it will then process them pixel by pixel to combine them into a single image. That processing is all backed by machine learning and powered by the A13 Bionic chipset.
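
To give a rough sense of what per-pixel merging means, here's a toy Swift sketch that fuses several grayscale frames by weighting each frame's pixel with a crude local-detail score. Apple hasn't published how Deep Fusion's neural network actually weighs frames, so everything below (the gradient-based score, the weighted blend) is an illustrative assumption, not Apple's method.

```swift
// Toy stand-in for per-pixel fusion: frames with more local texture at
// a pixel contribute more to the final value at that pixel.
func detailScore(_ frame: [[Float]], _ y: Int, _ x: Int) -> Float {
    let h = frame.count, w = frame[0].count
    let right = x + 1 < w ? frame[y][x + 1] : frame[y][x]
    let down  = y + 1 < h ? frame[y + 1][x] : frame[y][x]
    // Higher gradient -> more texture -> larger weight for this frame.
    return abs(right - frame[y][x]) + abs(down - frame[y][x]) + 1e-4
}

func fuse(_ frames: [[[Float]]]) -> [[Float]] {
    let h = frames[0].count, w = frames[0][0].count
    var out = Array(repeating: Array(repeating: Float(0), count: w), count: h)
    for y in 0..<h {
        for x in 0..<w {
            var weighted: Float = 0, total: Float = 0
            for frame in frames {
                let wgt = detailScore(frame, y, x)
                weighted += wgt * frame[y][x]
                total += wgt
            }
            out[y][x] = weighted / total  // per-pixel weighted blend
        }
    }
    return out
}

// Example: three tiny 2x2 "exposures" fused into one image.
let fused = fuse([
    [[0.2, 0.8], [0.4, 0.6]],
    [[0.3, 0.7], [0.5, 0.5]],
    [[0.1, 0.9], [0.4, 0.6]],
])
print(fused)
```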

The result, in theory, is sharper, more detailed photos from the iPhone 11 and 11 Pro cameras. And Deep Fusion will work automatically, detecting when a scene would benefit from it. We say "in theory" because the Deep Fusion feature hasn't yet rolled out to us for review.

We'll have to wait and see what Deep Fusion has in store when it eventually launches on consumer builds of iOS 13. We may see it before the end of October, when Apple typically launches a big iOS update.

Matt Swider