Reprojection could make your VR experiences lag-free

It's safe to say that despite the hype, virtual reality hasn't set the world on fire yet. That time may still come, but at the time of writing VR headsets are still more of a toy than an essential bit of household entertainment hardware.

Much of that is because affordable computers can only just about cope with the heavy demands that virtual reality places on hardware. And when the virtual world lags behind the user's movements, it can quickly make them feel sick.

This is a problem that will be solved as technology continues to improve. Already it's much cheaper than it used to be to buy a VR-ready gaming PC. But an international team of researchers believes it may have found another way to make virtual reality more accessible.

Reprojection

Thorsten Roth and Yongmin Li of Brunel University London's Department of Computing, together with Martin Weier and a team in Germany, have come up with a new image rendering technique that maximises quality while minimising latency.

It revolves around one of the main limitations of the human eye. The centre of our field of view is sharpest, and the level of detail we can see diminishes towards the edges. That's why we tend to turn our heads while watching tennis, rather than just our eyes.

So, the team figured, why not bring down the detail in the outer parts of the image? 

"We use a method where, in the VR image, detail reduces from the user's point of regard to the visual periphery," explains Roth, "and then our algorithm – whose main contributor is Mr Weier – then incorporates a process called reprojection."

"This keeps a small proportion of the original pixels in the less detailed areas and uses a low-resolution version of the original image to 'fill in' the remaining areas."
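In outline, that means rendering a cheap low-resolution frame for the whole view, keeping every full-quality pixel near the gaze point, keeping only a scattered fraction of full-quality pixels in the periphery, and filling the gaps from the upscaled low-resolution frame. The sketch below is an illustration of that idea rather than the team's published code; the function name, the keep_fraction parameter and the pixels-to-degrees conversion are all assumptions made for the example.

```python
import numpy as np

def foveated_reproject(full_res, low_res, gaze_xy, inner_deg=10.0,
                       deg_per_px=0.05, keep_fraction=0.1, rng=None):
    """Blend sparse full-quality pixels with an upscaled low-res image,
    keeping full detail only near the gaze point (hypothetical sketch).

    full_res: (H, W, 3) full-quality render
    low_res:  (h, w, 3) cheap render, much smaller than full_res
    gaze_xy:  (x, y) gaze position in full-res pixel coordinates
    """
    rng = np.random.default_rng() if rng is None else rng
    H, W, _ = full_res.shape

    # Upscale the low-res render with nearest-neighbour sampling.
    ys = np.arange(H) * low_res.shape[0] // H
    xs = np.arange(W) * low_res.shape[1] // W
    filled = low_res[ys[:, None], xs[None, :]]

    # Angular distance (eccentricity) of every pixel from the gaze point.
    yy, xx = np.mgrid[0:H, 0:W]
    ecc = np.hypot(xx - gaze_xy[0], yy - gaze_xy[1]) * deg_per_px

    # Inside the foveal region keep every original pixel; in the periphery
    # keep only a small random fraction and rely on the low-res fill.
    foveal = ecc <= inner_deg
    sparse = (rng.random((H, W)) < keep_fraction) & ~foveal
    return np.where((foveal | sparse)[..., None], full_res, filled)
```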

Optimised rendering

To tune the algorithm, the team asked a group of volunteers to watch a series of VR videos while their eye movements were tracked, then quizzed them on whether they noticed visual artefacts like blurring and flickering edges.

They found that the sweet spot was full detail for the inner 10° of vision, a gradual reduction between 10° and 20°, and then a low-resolution image outside of that. 
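Those thresholds translate naturally into a per-pixel detail schedule driven by angular distance from the gaze point. The sketch below is an illustrative assumption rather than the paper's exact falloff: the linear ramp and the minimum scale of 0.25 are not specified in the source.

```python
def detail_scale(eccentricity_deg, inner_deg=10.0, outer_deg=20.0,
                 min_scale=0.25):
    """Map angular distance from the gaze point to a rendering detail factor:
    full detail (1.0) inside the foveal region, a linear ramp down between
    the inner and outer angles, and the low-resolution floor beyond that."""
    if eccentricity_deg <= inner_deg:
        return 1.0
    if eccentricity_deg >= outer_deg:
        return min_scale
    t = (eccentricity_deg - inner_deg) / (outer_deg - inner_deg)
    return 1.0 + t * (min_scale - 1.0)
```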

"It's not possible for users to make a reliable differentiation between our optimised rendering approach and full ray tracing, as long as the foveal region is at least medium-sized," said Roth.

"This paves the way to delivering a real-seeming VR experience while reducing the likelihood you'll feel queasy."

The full details of the work were published in the Journal of Eye Movement Research.

Duncan Geere