The quiet revolution that's making your phone smarter than you at photography

Artificial intelligence (AI) is making its way into almost every aspect of our lives. It’s in our phones, and many of us have let it into our homes in the form of voice assistants in smart speakers.

Those are just the most visible implementations of AI, and over the coming years it will increasingly be used behind the scenes, in the cogs that keep our ever-smarter cities running.

However, it’s in imaging and photography that you get to see AI working its magic most clearly. Let's look at some of the best examples of this new technology actively enhancing photography. 

The AI camera

The most frequently marketed version of photographic AI today is in smartphones. Many new and recent models have AI-assisted features that use various kinds of scene and object recognition to enhance your photos. 

Different phone-makers take noticeably different approaches, though, each with its own distinctive features.

Huawei AI

Few companies shout as loudly about AI in phones as Huawei – it's a top-billing bullet point if you're looking to buy a handset like the Huawei P20 Pro.

There’s a separate AI shooting mode in the camera app that, in the Mate 20 Pro, can recognize 1,500 different scenes and situations. The processing then applies a matching color and contrast profile, making your images really pop.

An ultra-natural look is not the aim here. Huawei's AI photography maxes out color saturation for greater impact, not maximum fidelity. The results go down well on social media, though.
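Huawei hasn't published how its processing works, but the general pattern of scene-aware enhancement is easy to sketch. Below is a minimal, hypothetical Python example using Pillow: a stand-in scene classifier returns a label, and a per-scene profile boosts saturation and contrast to suit. The classify_scene function and the profile values are illustrative assumptions, not Huawei's actual pipeline.

```python
# A minimal sketch of scene-aware photo enhancement, in the spirit of
# Huawei's AI shooting mode. The scene classifier and profile values
# are hypothetical stand-ins, not Huawei's actual models or numbers.
from PIL import Image, ImageEnhance

# Per-scene (saturation, contrast) multipliers -- illustrative guesses.
PROFILES = {
    "greenery": (1.4, 1.10),  # punchier grass and foliage
    "blue_sky": (1.3, 1.05),  # deeper skies
    "food":     (1.5, 1.15),  # vivid, social-media-ready plates
    "default":  (1.1, 1.00),
}

def classify_scene(img: Image.Image) -> str:
    """Stand-in for a neural-network scene classifier (assumed)."""
    return "greenery"  # a real model would infer this from the pixels

def enhance(path: str) -> Image.Image:
    img = Image.open(path).convert("RGB")
    saturation, contrast = PROFILES.get(classify_scene(img), PROFILES["default"])
    img = ImageEnhance.Color(img).enhance(saturation)
    img = ImageEnhance.Contrast(img).enhance(contrast)
    return img

# enhance("holiday_snap.jpg").save("holiday_snap_pop.jpg")
```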

Apple Portrait Lighting

Portrait Lighting is one of Apple’s AI-assisted photography features, and emulates the effect of various kinds of studio lighting. A few other phones, like the Huawei P20 Pro, have also had a crack at this concept, but Apple has delivered the best implementation to date. 

So why is it AI? Portrait Lighting involves creating a 3D depth map of the subject’s face, then applying a 3D filter to add lighting effects that follow the contours of their features, as if they were being lit by a studio lighting setup. 
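Apple hasn't detailed the internals, but the core idea of relighting a photo using a depth map can be sketched in a few lines. The toy example below is an assumption about the general technique, not Apple's actual method: it derives surface normals from depth gradients, then applies simple Lambertian (diffuse) shading from a virtual light.

```python
import numpy as np

def relight(image: np.ndarray, depth: np.ndarray,
            light_dir=(0.5, -0.5, 1.0)) -> np.ndarray:
    """Relight an HxWx3 image using an HxW depth map.

    A toy version of depth-based relighting: estimate surface normals
    from the depth map's gradients, then shade each pixel by how
    directly its surface faces a virtual directional light.
    Not Apple's actual Portrait Lighting pipeline.
    """
    # Surface normals from the depth map's gradients.
    dz_dy, dz_dx = np.gradient(depth.astype(np.float32))
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth, np.float32)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    # Diffuse (Lambertian) term: clamp negative values to shadow.
    light = np.asarray(light_dir, np.float32)
    light /= np.linalg.norm(light)
    shading = np.clip(normals @ light, 0.0, 1.0)

    lit = image.astype(np.float32) * shading[..., None]
    return np.clip(lit, 0, 255).astype(np.uint8)
```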

Apple calls it a “studio in your pocket”. It won’t replace a studio, of course, but the results can be surprisingly effective.

Google Lens

Google is among the most prolific developers of AI technologies, and several of the camera modes on its phones flirt with AI-like processing.

The Pixel 3's Top Shot, for example, is a burst mode that captures a series of images and then chooses the ones it thinks are the best. Photobooth does the same sort of thing, but for photos of you and your friends pulling faces. Or smiling. 

Google Lens is a more dynamic demo of AI, though. It’s a camera mode that taps into Google’s image and text recognition, putting them into a real-world context. You can point your phone at products, landmarks and even wallpaper patterns, and Lens will try to find them online and provide relevant information. 

Google Photos

There’s a more practical, and almost hidden, use for the neural network behind Google’s image recognition: Google Photos, and in particular its search function.

At the top of the Google Photos app you’ll now see a search bar. You can type objects or themes into it, and Google’s AI algorithms kick in to find relevant images. Try it out: 'dogs', 'Christmas' and even 'cheese' will return relevant photos, if they're in your photo library.
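Google doesn't publish how Photos search works, but systems like this typically embed images and query text into a shared vector space and rank by similarity. Here's a bare-bones sketch of that pattern; the embedding functions are deterministic placeholders standing in for real neural networks, so the ranking here is meaningless, but the plumbing is the real shape of the technique.

```python
import numpy as np

def _fake_embedding(key: str) -> np.ndarray:
    """Placeholder: a real system uses a neural network here. This just
    returns a deterministic unit vector so the sketch actually runs."""
    rng = np.random.default_rng(abs(hash(key)) % (2**32))
    v = rng.standard_normal(512)
    return v / np.linalg.norm(v)

def embed_image(image_path: str) -> np.ndarray:
    return _fake_embedding(image_path)  # stand-in for an image model

def embed_text(query: str) -> np.ndarray:
    return _fake_embedding(query)       # stand-in, same vector space

def search(query: str, library: dict, top_k: int = 5) -> list:
    """Rank photos by cosine similarity between query and image vectors
    (the vectors are unit length, so a dot product is enough)."""
    q = embed_text(query)
    scores = {path: float(vec @ q) for path, vec in library.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

library = {p: embed_image(p) for p in ["dog1.jpg", "xmas.jpg", "cheese.jpg"]}
print(search("dogs", library, top_k=2))
```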

Photos also plays curator, choosing images to turn into animated GIFs, enhance with filters, or stitch into panoramas. All of this is based on an advanced kind of image recognition that is, in some circles, considered AI.

Nvidia image restoration 

Some implementations of AI can feel ordinary almost instantly, but there are some applications in the works that feel genuinely futuristic.

Nvidia’s image enhancement techniques are some of the most impressive real-world visual demonstrations of contextualized AI, and there are three ways in which it's implementing the tech that promise great things. 

The first is 'de-noising' of images. It uses a deep learning-based method for restoring image data obscured by noise, or even overlaid text. This is essentially a purer, far more advanced version of what phone cameras do when removing image noise from a photo.

However, it’s informed by a neural network trained on masses of other images, which helps it to recognize patterns and interpolate the data missing from the source image.
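At its heart, this kind of learned de-noising is just a network trained on pairs of clean and corrupted images. Below is a minimal PyTorch sketch of the recipe; it's a toy model nowhere near the scale or sophistication of Nvidia's actual networks, and the random tensors stand in for a real photo collection.

```python
import torch
import torch.nn as nn

# Toy denoiser: a small convolutional network trained to turn noisy
# images back into clean ones. A sketch of the training recipe only.
denoiser = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for step in range(1000):
    clean = torch.rand(8, 3, 64, 64)               # stand-in for real photos
    noisy = clean + 0.2 * torch.randn_like(clean)  # synthetic corruption
    loss = nn.functional.mse_loss(denoiser(noisy), clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, denoiser(noisy_photo) predicts the clean image.
```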

There’s a more dramatic demonstration of the power of Nvidia’s neural network too, in the form of AI in-painting. In the demo, parts of a source image are removed and re-drawn, with the missing information interpolated, once again, by the image-trained neural network.
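In-painting follows the same recipe as de-noising, except the corruption is a hole rather than noise: the network sees the image with a region blanked out, plus a mask marking the hole, and learns to predict the missing pixels. Building on the toy denoiser above, a training pair might be constructed like this; again, a sketch of the general idea, not Nvidia's method.

```python
import torch

def make_inpainting_pair(clean: torch.Tensor, hole: int = 16):
    """Blank out a square region and return (masked input, mask, target).

    clean: batch of images, shape (N, 3, H, W). The network's input is
    the damaged image concatenated with the mask (4 channels in total),
    so it knows where it has to 'imagine' new pixels. A toy stand-in
    for an in-painting training setup.
    """
    n, _, h, w = clean.shape
    mask = torch.ones(n, 1, h, w)
    y = torch.randint(0, h - hole, (1,)).item()
    x = torch.randint(0, w - hole, (1,)).item()
    mask[:, :, y:y + hole, x:x + hole] = 0.0
    damaged = clean * mask  # zero out the hole
    return torch.cat([damaged, mask], dim=1), mask, clean
```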

Finally, Nvidia can turn 30fps or 60fps video into 480fps slow-mo footage, up to 16 times slower than standard. As with the in-painting technique, AI is used to create image data that simply isn't there in the source footage.

TVs have offered comparable frame-interpolation modes for years. However, Nvidia's AI can handle, for example, the flow of fabric much better, for more natural-looking results.
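The difference is in how the in-between frames are made. The naive approach, roughly what the simplest TV modes do, just blends neighbouring frames, which turns fast motion into ghosting; learned methods like Nvidia's instead estimate how pixels move between frames and warp along that motion. The blend version is short enough to show in full:

```python
import numpy as np

def midpoint_frame(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Naive interpolation: average two frames to fake the one between.

    Fast-moving objects turn into ghosts, because this ignores motion.
    Learned methods estimate per-pixel motion between the frames and
    warp pixels along it instead of blending them in place.
    """
    blend = 0.5 * frame_a.astype(np.float32) + 0.5 * frame_b.astype(np.float32)
    return blend.astype(np.uint8)

# Doubling the frame count of a clip, frame by frame:
# out = []
# for a, b in zip(frames, frames[1:]):
#     out += [a, midpoint_frame(a, b)]
```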

Iconem

Current AI is largely a lot of small, not hugely 'intelligent' calculations that, when applied on a large scale, produce amazing results.

That description certainly holds true for Iconem, a heritage-preservation startup that uses Microsoft-developed AI.

Iconem’s goal is to record images of important historical sites under threat from war, erosion or other kinds of damage, creating a life-like record of their current state. 

The AI’s job here is to map the tens of thousands of photos, captured by drone, onto a 3D model of a heritage site. Iconem visits sites that are difficult to access or dangerous, such as the Great Umayyad Mosque in Aleppo in war-torn Syria.

You can see some of Iconem’s scans in action on YouTube, and its scans of Aleppo have been made into an app, available from Google Play.

Google BigGAN

You may know Google for its search engine and Android mobile operating system, but it's also developing mountains of innovative new technologies through its Labs programs and offshoots.

DeepMind is Google parent company Alphabet’s AI division, and BigGAN is one of its latest projects – devised by an intern, no less. It’s a generative adversarial network (GAN): a piece of AI software that generates brand-new images.

First, an image is generated from a random number (a 'noise' vector). A second 'discriminator' network then compares it against real images to judge how convincing it is. A new version of the image is then created, in an attempt to bring its characteristics closer to that real-world reference.
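That push-and-pull between a generator and a judge is the 'adversarial' part of a GAN. Here's a minimal PyTorch sketch of one training step on flattened 32x32 images; BigGAN itself is enormously larger and more intricate, but the loop has the same shape. The random 'real' batch is a stand-in for an actual photo dataset.

```python
import torch
import torch.nn as nn

# Tiny GAN on flattened 32x32 RGB images -- a toy with BigGAN's shape.
generator = nn.Sequential(nn.Linear(128, 512), nn.ReLU(),
                          nn.Linear(512, 32 * 32 * 3), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(32 * 32 * 3, 512), nn.LeakyReLU(0.2),
                              nn.Linear(512, 1))
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(16, 32 * 32 * 3) * 2 - 1  # stand-in for real photos
noise = torch.randn(16, 128)                # the 'random number' input

# Discriminator step: learn to tell real images from generated ones.
fake = generator(noise).detach()
d_loss = (bce(discriminator(real), torch.ones(16, 1)) +
          bce(discriminator(fake), torch.zeros(16, 1)))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# Generator step: produce images the discriminator scores as 'real'.
fake = generator(torch.randn(16, 128))
g_loss = bce(discriminator(fake), torch.ones(16, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```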

You can check out some of BigGAN’s results online. It can create some amazingly realistic natural textures, such as grass and tree lines. However, BigGAN-generated human faces still look like the stuff of Francis Bacon’s nightmares. 
