We’ve now come to expect that every year, when the major phone companies announce their latest models, the camera will receive an upgrade. Each year, we can be sure the mobile industry’s hardware engineers have gone that little bit further to one-up the competition. Maybe an extra megapixel or two, maybe a better sensor, or a faster shutter.
But it is only within the last couple of years or so that this hardware arms race has been upset. The battle to create the best camera is no longer a question of hardware (though it doesn’t hurt). The real battle, thanks to the rise of artificial intelligence, is in software.
Today, the most exciting innovations in photography come not from engineers tinkering with metal and glass, but from algorithms that make our shots better than ever and help us stay organised in the process.
Portrait mode battle
Perhaps the most striking consumer example of this is shooting portraits on our phones. For the last couple of years, some phones have used dual rear cameras to sense the depth of the subject in the image, then applied some sophisticated deep learning to automagically cut out the subject of the photo and blur the background.
Even for ham-fisted amateur photographers, it can create some truly stunning results that are just begging to be posted on Instagram.
In the future though, AI could take the place of the second camera and enable similar “portrait” style shots with a nice blurry background (or “bokeh”, to use the technical term) - using only one camera.
How would it work? By using machine learning to do the hard work and pick the objects out of the image. The same algorithms could be applied to the front-facing camera too, making our selfies even better.
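As a rough sketch of the idea - not how any particular phone actually implements it - the final step can be illustrated in a few lines of Python. Given a subject mask (which, on a real device, would come from a deep segmentation network), you keep the subject sharp and composite it over a blurred background. The function names and the simple box blur here are illustrative stand-ins only:

```python
import numpy as np

def box_blur(img, k=5):
    # crude box blur as a stand-in for a real lens-style blur
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def portrait_mode(img, subject_mask, k=9):
    # img: HxWx3 float array; subject_mask: HxW bool array (True = subject).
    # On a real phone the mask is predicted by a segmentation network;
    # here it is simply passed in.
    blurred = box_blur(img, k)
    mask = subject_mask[..., None].astype(float)
    # keep subject pixels untouched, replace the rest with the blurred image
    return mask * img + (1 - mask) * blurred
```

The key point is that once the subject mask exists, the “portrait” effect is just blending - which is why the hard, AI-shaped problem is producing that mask from a single camera in the first place.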
Zoom and enhance
The advances in portrait mode, though, are just scratching the surface of how AI is transforming photography: there are plenty of examples of deep learning algorithms improving our pictures and doing things that might once have seemed impossible.
For example, if you want to shoot with a professional DSLR but are intimidated by all of the different settings and buttons, AI can come to the rescue.
Arsenal is a tiny device that plugs into your fancy camera: it not only lets you connect your phone to take wireless shots, but can automatically choose the best settings for your shot, having trained its neural network on thousands of existing professional photos.
Photography is being transformed more broadly too: it turns out that Google Street View is probably a better photographer than you’ll ever be.
You might not have thought that photos taken automatically by a car could ever be particularly pretty, but the company trained a neural network on thousands of professional photos, and the AI was able to divine the qualities of a decent photo in terms of lighting, composition, filters and so on.
Then it simply fed in some shots its cars took in California, and the algorithms got to work manipulating the images to make them beautiful. And the results are stunning.
Rather cheekily, Google then conducted its own Turing test-style experiment, showing some of its AI-created compositions mixed in with real pro shots - and around 40% of the AI shots received grades similar to pro or semi-pro shots.
Photoshop creator Adobe has also been hard at work applying some machine intelligence to what it does. Popular Science details an app it has created that will take your selfies and intelligently modify them to appear as though they were taken with a telephoto lens; apparently, de-emphasizing some features (like your massive honking nose) creates an altogether more flattering look.
According to the company, this sort of thing is already possible using existing tools, like the Adobe Fix mobile app, but AI automates the process.
The company is also teasing how AI could be used within Photoshop itself - not replacing the need for a human to manually edit images, but making the process even easier. For example, in November last year the company demoed a sophisticated cropping tool. Rather than relying on the human user to slowly cut out an object from an image, AI is used to spot objects and chop them out, ready to be manipulated however the user wishes.
And perhaps most amazingly? AI is getting us closer to making “zoom and enhance” - the impossible plot device used in pretty much every modern detective show - a real thing.
PetaPixel reports that a website called LetsEnhance will take your small JPEGs and scale them up. Unlike simply dragging the low-resolution image larger in, say, Photoshop, it actually uses AI to intelligently fill in the gaps and add detail to your images - creating much prettier results in the process.
Sure, it might not be a good idea to have your key evidence in court based on images designed by AI - but at least it’ll make your low-res holiday snaps from 2003 look a little better when you print them.
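To see why this is harder than it sounds, consider what ordinary upscaling does. A minimal sketch (the function name is hypothetical, and this is plain nearest-neighbour scaling, not LetsEnhance’s actual method):

```python
import numpy as np

def naive_upscale(img, factor=2):
    # nearest-neighbour scaling: each pixel is simply repeated,
    # so the enlarged image contains no detail the original lacked
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# An AI super-resolution model instead predicts plausible new
# high-frequency detail for the enlarged image - that prediction step
# is what separates "zoom and enhance" from plain resizing.
```

Pixel repetition is why zooming into a low-res JPEG just gives you bigger blocks; the AI approach invents detail that was never captured, which is precisely why it’s both impressive and unsuitable as courtroom evidence.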
What’s clear is that in just a few short years, AI has started transforming both the way we shoot photos - and how we edit them.
Perhaps the most exciting thing, though, is that this is just scratching the surface: photography is only the beginning. As “computer vision” becomes an important, baked-in part of other new technologies - such as self-driving cars - AI is going to become increasingly sophisticated at interpreting and understanding the contents of our images.
We don’t yet know how the tech industry will best make use of these new technologies - but what is clear is that in the future, your choice of camera app could be just as important as your choice of hardware.
- TechRadar's AI Week is brought to you in association with Honor.