You can do an awful lot with digital imaging to make things look better. Take an editing program like Photoshop, for example. Even if you’re not a professional, it’s relatively easy to tweak and fine-tune your less-than-perfect shots and make them look a little more presentable.
If you’re an image-editing hotshot, meanwhile, it’s entirely possible to transform substandard pictures into minor masterpieces. Retouching has therefore been good news for anyone wanting to get rid of rough edges in a shot, particularly if they’re selling something that needs the subject matter to look, well, perfect.
British innovation outfit Cambridge Consultants, however, is taking that thinking and turning the repair process up to 11. The company, based unsurprisingly in a science park in the flatlands of Cambridge in the UK, has just unveiled the rather science-fiction-sounding DeepRay.
This is technology based around artificial intelligence (AI) that can produce sharper and much less distorted images from pictures that have been damaged or had elements obscured. More impressively, DeepRay uses AI to tackle live video too.
Smart thinking
Why would a company want to invest time and money in doing something like that? Well, when you start to think about potential uses for this sort of application it all starts to make sense.
DeepRay has oodles of potential, and the company cites the world of autonomous driving, as well as uses in healthcare and medical imaging, as a few of its target markets. And the company seems to have so much faith in its new system that it reckons DeepRay can outperform humans when it comes to scrutinizing imagery, be it a still shot or moving footage.
Tim Ensor, commercial director for artificial intelligence at Cambridge Consultants, said: “This is the first time that a new technology has enabled machines to interpret real-world scenes the way humans can – and DeepRay can potentially outperform the human eye. This takes us into a new era of image sensing and will give flight to applications in many industries, including automotive, agritech and healthcare.”
The company has a video that showcases the powerful potential of DeepRay and, with a little bit of imagination, it is possible to see where the idea could be used to good effect. Think, for example, of autonomous vehicles with on-board cameras whose view of the road ahead can be obscured or distorted by weather conditions and dirt on their lenses and other vital sensors. DeepRay might be able to use its deep learning technology to help those cameras see more clearly.
Promising results
Currently, the system doesn’t appear to be perfect, with the results we’ve seen looking passable but flawed. However, the point is that the technology is heading in the right direction. “The ability to construct a clear view of the world from live video, in the presence of continually changing distortion such as rain, mist or smoke, is transformational,” adds Ensor. “We’re excited to be at the leading edge of developments in AI. DeepRay shows us making the leap from the art of the possible, to delivering breakthrough innovation with significant impact on our clients’ businesses.”
Of course, purists might claim that all this jiggery-pokery isn’t really the way to do things and, when it comes to correcting images, they might have a point. In the same way that Adobe’s AI and machine learning framework, Sensei, can magically transform images, among other things, DeepRay could unfairly be seen as being a bit of a cheat. But, unlike Photoshopped magazine covers, this isn’t a system that is being used to fool the eye into thinking an image looks better than the real thing.
In fact, if DeepRay is able to read complex visual situations and correct live video then the benefits are obvious. It’s a potentially very practical solution. And, if the system can actually outperform the human eye, then perhaps an autonomous vehicle might be a lot safer having it on board. DeepRay, Cambridge Consultants points out, is able to learn what real-world scenes and objects look like and, similarly, judge how they appear when various distortions are applied.
From there, the system can tackle distorted images it has never seen before and form a real-time judgement of the actual scene. DeepRay has the capacity to see through the distortion using extensions of the complex-sounding Generative Adversarial Network (GAN).
Part of the training process includes six neural networks competing against each other in teams. These teams invent difficult scenes and situations and subsequently attempt to remove any distortion that might be present.
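Cambridge Consultants hasn’t published the details of DeepRay’s six-network architecture, but the adversarial idea it builds on is the standard GAN objective: a generator tries to produce output the discriminator scores as real, while the discriminator tries to tell real from fake. The sketch below illustrates that two-player loss on toy 1-D data using numpy; the linear generator and logistic discriminator here are illustrative stand-ins, not DeepRay’s actual models.

```python
import numpy as np

# Toy sketch of the GAN minimax objective. The generator and
# discriminator are deliberately tiny (and untrained) -- the point
# is the shape of the two competing losses, not the models.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a known clean distribution.
real = rng.normal(loc=2.0, scale=0.5, size=64)

# Generator: a linear map from noise z to a fake sample.
a, b = 0.5, 0.0
z = rng.normal(size=64)
fake = a * z + b

# Discriminator: a logistic model scoring how "real" a sample looks.
w, c = 1.0, -1.0
d_real = sigmoid(w * real + c)   # ideally near 1 for real samples
d_fake = sigmoid(w * fake + c)   # ideally near 0 for fakes

# Adversarial losses: the discriminator minimises d_loss, the
# generator minimises g_loss -- each update makes the other's job harder.
eps = 1e-12
d_loss = -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))
g_loss = -np.mean(np.log(d_fake + eps))

print(f"discriminator loss: {d_loss:.3f}")
print(f"generator loss:     {g_loss:.3f}")
```

In a full system the two sides are trained in alternation until the generator’s output is hard to distinguish from the real thing; for an image-restoration task like DeepRay’s, the generator’s input would be a distorted frame rather than pure noise.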
Technical revelation
The company says that this sort of end-to-end training using so many networks together has only become possible in the past couple of years. Cambridge Consultants is hardly a newcomer to the field of innovation either: the firm has been in existence for over 50 years and currently boasts a workforce of more than 800 staff.
So if the research being carried out to develop DeepRay helps us all see more clearly in the years to come, albeit digitally, then so much the better. Our future vision might just depend on it.
Rob Clymo has been a tech journalist for more years than he can actually remember, having started out in the wacky world of print magazines before discovering the power of the internet. Since he's been all-digital he has run the Innovation channel during a few years at Microsoft as well as turning out regular news, reviews, features and other content for the likes of TechRadar, TechRadar Pro, Tom's Guide, Fit&Well, Gizmodo, Shortlist, Automotive Interiors World, Automotive Testing Technology International, Future of Transportation and Electric & Hybrid Vehicle Technology International. In the rare moments he's not working he's usually out and about on one of numerous e-bikes in his collection.