How AI scene optimization works on your phone's camera


If you've got a flagship camera phone, you're probably wondering how it takes such incredible pictures. Phones brighten up shadows, blur out backgrounds for DSLR-like portrait shots, pump up colors and generally add pizzazz to pictures in a way traditional cameras don't. 

A big part of this process comes down to AI scene optimization, which you'll find in some capacity on most smartphones today – and at its best on the best smartphones on the market. 

Camera phones are smart; after all, they're mini computers. The chip that powers your phone delivers processing power we couldn't have dreamed of a couple of decades ago. 

Smartphone makers couple that intelligence with the tiny camera module on the back of your phone to basically give it artificial eyes. With these eyes, your phone can understand what you're shooting, figure out which bits to optimize, and make your final photo Instagram-ready without the need for hours of edits. 

More impressively, in certain scenes a smartphone camera can outperform a $1,000+ DSLR, thanks to all that intelligence and processing power crammed into its chipset, plus smart HDR effects. 

In short, this is a story about how adversity – tiny smartphone sensors – created the next evolution in computational photography.

What is AI scene optimization?

The clue's in the name. AI scene optimization is the process by which your phone optimizes a photo you take based on the scene you capture.

To talk through taking a photograph: when you point your camera phone at a subject, light passes through the lens and falls on the sensor. That light is processed into an image by the image signal processor (ISP), which is usually part of your smartphone chip, whether that's a Qualcomm Snapdragon 888 or an Apple A11 Bionic.

ISPs are nothing new. They've been on digital cameras for years, and do the same fundamental things on smartphones. 

Some of their jobs include feeding you an image to preview, reducing noise, adjusting exposure, white balance, and more. What's special about smartphones today, however, is just how smart those ISPs are. 
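
To make that concrete, here's a heavily simplified sketch of the kind of steps an ISP performs, written in Python. This is an illustration only (the function, constants and steps are invented for this example, not any vendor's actual pipeline), but the flow from raw sensor values to a viewable image is the same idea:

```python
import numpy as np

def simple_isp(raw, black_level=64, wb_gains=(2.0, 1.0, 1.6), gamma=2.2):
    """A toy ISP: black-level subtraction, white balance, gamma.
    Real ISPs also demosaic, denoise, sharpen and tone-map,
    usually in dedicated hardware at many frames per second."""
    img = np.clip(raw.astype(np.float64) - black_level, 0, None)
    img /= img.max()                      # normalize to [0, 1]
    img *= np.array(wb_gains)             # per-channel white balance
    img = np.clip(img, 0.0, 1.0)
    img = img ** (1.0 / gamma)            # gamma curve for display
    return (img * 255).astype(np.uint8)   # 8-bit image for the preview

# Example: a fake 4x4 sensor readout with three color channels
raw = np.random.randint(64, 1024, size=(4, 4, 3))
preview = simple_isp(raw)
```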

For starters, with improved computing capabilities, camera phone ISPs can better understand what you're shooting. On an old DSLR or digital compact camera, you might have had to set the scene manually – remember those dials you had to switch from a running man icon to a palm tree icon? Now, smartphones do that switching all by themselves. That's just scratching the surface, though.
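
Conceptually, that automatic "dial switching" boils down to a classifier labeling the scene, then a lookup of tuning parameters for that label. Here's a minimal sketch (the labels, preset values and the classify_scene stub are all invented for illustration):

```python
# Hypothetical presets: scene label -> processing parameters
SCENE_PRESETS = {
    "portrait":  {"saturation": 1.05, "shadow_lift": 0.15, "sharpen": 0.2},
    "landscape": {"saturation": 1.20, "shadow_lift": 0.05, "sharpen": 0.5},
    "night":     {"saturation": 1.00, "shadow_lift": 0.40, "sharpen": 0.1},
    "food":      {"saturation": 1.30, "shadow_lift": 0.00, "sharpen": 0.3},
}

def classify_scene(image) -> str:
    """Stand-in for the neural network that runs on the phone.
    A real model returns a label (and a confidence) from the live preview."""
    return "landscape"  # placeholder result

def pick_tuning(image):
    label = classify_scene(image)
    # Fall back to neutral processing if the scene isn't recognized
    neutral = {"saturation": 1.0, "shadow_lift": 0.0, "sharpen": 0.3}
    return SCENE_PRESETS.get(label, neutral)
```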


With AI scene optimization today, a camera phone can understand multiple elements in your picture and tune the processing for each of them very specifically.

For example, if you take a photo of a person in front of a grassy field with blue sky in the frame, your phone's ISP could brighten up their face given they're probably the subject, boost the greens in the grass to make them look richer and enhance the blues in the sky separately, and depending on your phone, possibly even soften the background slightly to pull focus on your subject.
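
In code, that per-element tuning looks roughly like segmentation masks plus a different adjustment per mask. Below is a simplified sketch that assumes a segmentation model has already produced the masks; the multipliers are made-up values, not any phone's real tuning:

```python
import numpy as np

def optimize_scene(img, masks):
    """Tweak each detected region of a float RGB image in [0, 1].
    `masks` maps region names to boolean arrays matching the image's
    height and width; a real phone gets these from a segmentation network."""
    out = img.copy()
    if "face" in masks:
        out[masks["face"]] = np.clip(out[masks["face"]] * 1.15, 0, 1)         # brighten subject
    if "grass" in masks:
        out[masks["grass"], 1] = np.clip(out[masks["grass"], 1] * 1.2, 0, 1)  # richer greens
    if "sky" in masks:
        out[masks["sky"], 2] = np.clip(out[masks["sky"], 2] * 1.2, 0, 1)      # deeper blues
    return out

# Example with a tiny 2x2 image and a fake "sky" mask
img = np.full((2, 2, 3), 0.5)
sky = np.array([[True, True], [False, False]])
result = optimize_scene(img, {"sky": sky})
```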

Many smartphones today take things a step further. Sony's top-end smartphones, for example, have pet eye-tracking, while Huawei smartphones have automatic night detection that works so well on flagships, it can brighten up a virtually pitch-black scene, making night look like day. 

Night detection has made its way to iPhones too, alongside Portrait Mode, which allows for maximum control over background blur and lighting effects when capturing a photo of a person. Google's Pixel phones, meanwhile, are renowned for their astrophotography mode.

While it's activated manually rather than by AI, once fired up it looks at the night sky and intelligently assesses how long to keep the digital shutter open so it can capture stars and even galaxies.
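
That shutter-time decision can be sketched as a simple heuristic: the darker the scene, the longer the total exposure, split into short frames that get aligned and stacked so the stars don't trail. This is a toy version with assumed constants, not Google's actual algorithm:

```python
def astro_exposure_plan(scene_brightness, max_total_s=240.0, max_frame_s=16.0):
    """Pick a total exposure from measured scene brightness
    (0.0 = pitch black, 1.0 = daylight), then split it into short
    frames that can be aligned and stacked to avoid star trails."""
    total = min(max_total_s, max_total_s * (1.0 - scene_brightness))
    frames = max(1, round(total / max_frame_s))
    return frames, total / frames

frames, per_frame = astro_exposure_plan(scene_brightness=0.02)
print(f"{frames} frames of {per_frame:.1f}s each")  # 15 frames of 15.7s each
```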

Surely then, smartphones must pale in comparison to DSLRs when it comes to AI photography? Actually, they don't. Interestingly, camera makers are learning a thing or two about photo processing from camera phone engineers.


Small sensors, smart solutions

If you think back 10 years, Nokia was releasing smartphones with big, powerful Xenon flashes, like the Nokia 808 PureView.

It had to bump up the hardware spec of its camera phones because pictures taken on most phones at night looked like an atrocious, grainy mess. Even a phone like the 808 PureView wasn't cutting the mustard in really difficult scenes.

The reason mobile cameras have been so challenged is that phones need to fit in our palms and pockets, and so, need to be small. Smartphones also have to feature a bunch of other stuff – screens, speakers, batteries, antennas, and more. 

Tiny mobile sensors with tiny lenses struggle to pull in lots of light. More light translates to a better image, and therein lies the issue – small sensors, limited light-capturing capabilities, poor image quality. 
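
To put numbers on that, the light a sensor can gather scales with its area, and the gap between a phone and a DSLR is enormous. A quick back-of-the-envelope comparison (sensor dimensions are approximate):

```python
# Approximate sensor dimensions in millimetres
full_frame = 36.0 * 24.0     # a full-frame DSLR sensor: 864 mm^2
phone = 6.17 * 4.55          # a 1/2.3-inch phone sensor: ~28 mm^2

print(f"The DSLR sensor has ~{full_frame / phone:.0f}x the area")
# -> roughly 31x the light-gathering surface for the same exposure
```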

These limitations for mobile cameras forced phone makers to stop trying to fix the problem with better hardware that was expensive, oversized and battery-consuming, and look to software instead. 

One of the first times this really hit headlines was when Google released the Pixel, a phone with no optical image stabilization, but with such good electronic stabilization that it outperformed much of the competition. 
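
Electronic stabilization of that sort can be sketched as shooting a slightly larger frame than you need, then shifting the crop against the measured shake. This bare-bones illustration leaves out what real EIS adds (gyroscope fusion, motion prediction, rolling-shutter correction):

```python
import numpy as np

def stabilize_frame(frame, shake_dx, shake_dy, margin=32):
    """Crop `margin` pixels off each edge, offsetting the crop window
    against the measured shake so the visible image stays still."""
    h, w = frame.shape[:2]
    dx = int(np.clip(-shake_dx, -margin, margin))  # counter the shake,
    dy = int(np.clip(-shake_dy, -margin, margin))  # clamped to the margin
    top, left = margin + dy, margin + dx
    return frame[top:h - margin + dy, left:w - margin + dx]

frame = np.zeros((480, 640, 3), dtype=np.uint8)               # one video frame
steady = stabilize_frame(frame, shake_dx=5.0, shake_dy=-3.0)  # 416x576 crop
```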

The Pixel and Pixel 2 then went on to showcase incredible photo processing capabilities that transformed photos from meh to wow in front of your very eyes by recognizing the scene.

This then led brands like Huawei to build neural processing units into their chipsets, with the Mate 10 showcasing AI scene detection, and the feature soon found its way onto phones from other smartphone makers, with Samsung handsets detecting roughly 32 scene types.


And that's how smart people solving a photography challenge used software and AI scene detection to make possible something hardware alone couldn't deliver: high-quality smartphone photos in all lighting conditions.

What can't AI scene detection do?

The next frontier is AI scene detection in video. While it's already available to a degree, the super-smart nighttime capabilities Apple, Google (Night Sight), and Huawei apply to photos haven't completely made the leap to seriously crisp night video. 

Video runs at 24 frames per second or more, and processing every one of those frames is another level of computational demand altogether.
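
A quick bit of arithmetic shows why. At video frame rates the processor gets only tens of milliseconds per frame, where a multi-frame night photo can spend several seconds on a single shot:

```python
for fps in (24, 30, 60):
    budget_ms = 1000 / fps
    print(f"{fps} fps -> {budget_ms:.1f} ms to capture and process each frame")
# 24 fps -> 41.7 ms, 30 fps -> 33.3 ms, 60 fps -> 16.7 ms
```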

As processors get more powerful, AI development reaches new heights, and smartphone sensors become better able to capture light despite their small sizes, AI scene detection looks set to continue changing the face of photography for all.

Basil Kronfli

Basil Kronfli is the Head of Content at Make Honey and a freelance technology journalist. He is an experienced writer and producer, skilled in video production, and runs the technology YouTube channel TechEdit.