Google’s amazing new photo AI brings light to darkness and much, much more

Digital camera taking photo in low-light conditions
(Image credit: Unsplash)

Photographers may soon be able to effectively ‘see in the dark’ after Google Research added a new AI noise reduction tool to its MultiNeRF project.

The RawNeRF program uses artificial intelligence to reconstruct images, adding higher levels of detail (and far fewer unsightly artifacts) to photos taken in dark, low-light settings. According to the team behind the project, it works better than any other noise reduction tool out there.

“When optimized over many noisy raw inputs, NeRF produces a scene representation so accurate that its rendered novel views outperform dedicated single and multi-image deep raw denoisers run on the same wide baseline input images,” the researchers explained in a Cornell University paper. 

What is NeRF? 

NeRF is a view synthesizer - a tool capable of scanning thousands of photographs to reconstruct accurate 3D renders.
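Under the hood, a NeRF is a neural network that maps a 3D position (and viewing direction) to a colour and a density, and novel views are produced by accumulating those values along camera rays. The sketch below illustrates that standard volume-rendering step only; the `fake_nerf` function is a stand-in invented here for illustration, not the trained network from the paper:

```python
import numpy as np

def fake_nerf(points):
    """Stand-in for a trained NeRF: returns (rgb, density) per sample point."""
    rgb = np.clip(points * 0.5 + 0.5, 0.0, 1.0)   # arbitrary colour field
    density = np.full(len(points), 2.0)            # arbitrary uniform density
    return rgb, density

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Accumulate colour along one camera ray (standard volume rendering)."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction                 # samples along the ray
    rgb, density = fake_nerf(points)
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))         # spacing between samples
    alpha = 1.0 - np.exp(-density * delta)                   # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # light surviving so far
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)              # final pixel colour

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
```

Training optimizes the network so that rays rendered this way reproduce the input photographs, after which the same procedure can render viewpoints the camera never visited.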

According to Ben Mildenhall, one of the project researchers, NeRF works best with well-lit photographs and low noise levels. In other words, it’s built for daytime shooting.

Low-light and night shoots proved problematic: details disappear into shadow, and images grow noisier when the brightness is raised in post. The issue, Mildenhall and the team found, was that denoising tools can reduce that noise somewhat, but at the cost of image quality.

With the advent of RawNeRF, artificial intelligence is set to quieten the noise without stripping away the detail - effectively letting shutterbugs ‘see in the dark’.

In a video demonstration, NeRF in the Dark - originally published in May 2022 and largely unnoticed at the time - Mildenhall shows a scene captured on a cell phone and lit only by candlelight. RawNeRF is “able to combine images taken from many different camera viewpoints to jointly denoise and reconstruct the scene,” the Google researcher explains.
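RawNeRF’s joint denoising is far more sophisticated than simple frame averaging, but the underlying statistics are the same: combining N independent noisy observations of a scene shrinks random noise by roughly the square root of N. A toy illustration of that principle (not code from the project):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((100, 100), 0.1)                       # a dim, low-light "scene"
shots = clean + rng.normal(0.0, 0.05, (25, 100, 100))  # 25 noisy captures of it
merged = shots.mean(axis=0)                            # combine the observations

noise_single = (shots[0] - clean).std()                # noise in one capture
noise_merged = (merged - clean).std()                  # noise after merging
# with 25 captures, noise_merged is roughly noise_single / 5
```

RawNeRF goes further by letting the captures come from different camera positions: the 3D scene model is what lines them up so they can be jointly denoised.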

RawNeRF denoiser side by side comparison

Original (L) vs RawNeRF (R) (Image credit: Google Research)

Reconstructed images are rendered in a linear HDR color space, letting users further manipulate camera angles, exposure, tonemapping, and focus. In his video, Mildenhall notes how varying each of these together “creates an atmospheric effect that can bring attention to different regions of the scene.”

While still in the research phase and not an officially supported Google product (yet), RawNeRF offers a tantalizing glimpse of how AI could help creatives better reflect the world around them.

Steve Clark
B2B Editor - Creative & Hardware

Steve is TechRadar Pro’s B2B Editor for Creative & Hardware. He explores the apps and devices for individuals and organizations that thrive on design and innovation. A former journalist at Web User magazine, he's covered software and hardware news, reviews, features, and guides. He's previously worked on content for Microsoft, Sony, and countless SaaS & product design firms. Once upon a time, he wrote commercials and movie trailers. Relentless champion of the Oxford comma.