Proper virtual reality (VR) headsets have only been available for a few months to a year, and already VR is the next big thing in tech that we can't stop hearing about. But this has happened before – at least a few times.
What makes this time different, and will VR finally stick this time as the future of not just gaming, but social networking, computing and - frankly - everything?
During his IFA 2016 keynote address in Berlin, AMD Senior Vice President and CTO Mark Papermaster showed his hand in all this pretty clearly. (Really, it's a fascinating hour-long talk that you can watch right below.)
And, considering Papermaster has even contributed his talents to IBM's PowerPC processor design – you know, the one Apple used for so long in its Macintosh – you'd be smart to heed his words on where this is all headed.
So, we did just that in a candid conversation following the keynote, in which Papermaster dishes on what he thinks it will take for VR to hit mainstream before it fizzles out - like it's done too many times before.
TechRadar: You mentioned on-stage that a certain gap had developed between the power of AMD's processors and the competition. Why do you think that gap had occurred, and how much of an inspiration was VR to close that gap?
Mark Papermaster: Why the gap developed is that AMD was focused on what was certainly a huge trend in the market, and that was smartphones, tablets [and] mobility. And, there's nothing wrong with that focus.
The roadmap at the time had a trajectory that really improved the power efficiency of AMD's CPUs. So, it's not that it was wrong, but it was tailored to those markets. And, it didn't allow us to be as competitive as we needed to be for high-performance desktops – which is where you're going to power the most demanding virtual reality, and which we saw as emerging – and, of course, servers.
What we did several years ago, I joined in late 2011, is that we reset the strategy to be about high performance – to make sure, as a company, that we provide compelling high performance and then design it in a way that is scalable.
If you can hit high performance, you can hit the kind of markets that our customers need to hit their most demanding workloads. And then, with the scalability we designed in, you can then bring it into mainstream, you can bring it into value markets, and it scales just fine.
Remember what I said about power management, the good news about all that we did there is that we didn't throw any of that away. So, basically, we redesigned our CPUs for high performance and applied all of the power management to it.
With Radeon, we did the same thing. With Radeon, it's all about balancing performance. You have to hit a certain number of teraflops – of course, we have to optimize for that. But, in the end, it's the same story there: we applied our power management technique.
And, if you look at our RX 480, it's tremendous. Under gaming workloads, you can save up to 40% power compared to the previous generation.
You noted that PC hardware is going to be what pushes this medium forward toward full immersion. But, I noticed that, during the keynote, mobile wasn't mentioned a ton in terms of the whole ecosystem. We have a number of mobile devices out there powering these experiences to varying degrees of success. How important do you think mobile VR experiences are to pushing this medium forward, with full immersion being the goal?
First off, it's extremely important. The initial capabilities, when you show the art of what's possible, that has to start with the maximum performance capabilities to really show, in today's technology, how lifelike you can be, because that sets the bar.
And, mobile is extremely important because, when you look at what we're actually doing, we just showed a product that imagines running VR from a PC in a backpack. And, we can even further shrink this technology over time. So, what you'll see is, just like in the pure PC space today without virtual reality, you see a whole range of product SKUs from mobility right to the high performance desktop.
What you should expect to see is that mirror into the VR-ready applications. So, VR-ready will move right down into mobile probably faster than most people think.
During the keynote, you mentioned this "64X [from today's capability], 1,000 teraflops" as sort of the gold standard for a fully immersive VR experience.
Well, [it's] a gold standard because, once you hit that level, your body cannot distinguish between what you see today looking through your eyes versus what the computer is generating.
So, technically speaking, how did you come to that figure? Is that research you're doing internally?
Yeah, that's an internal prediction, based on information we've looked at as to simply the resolution of the eye. I'll tell you, I've looked at other companies' projections, and there is some range here. It's a bit subjective.
It depends on the application. So, don't forget, virtual reality is a very demanding application. If you're staring at your smartphone – or, say, a tablet or even a laptop – you're looking at a two-dimensional image at a longer separation.
Think about that as opposed to a head-mounted display, where that display is right into your focal line. So, it's actually just a few inches away from your eye, where you can discern much more of the pixelation.
You may hear different numbers, but our projection is based on our understanding of VR and its interaction with the human body.
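As a back-of-envelope check (our arithmetic, not AMD's), the keynote's figures are internally consistent: a 1,000-teraflop target that is "64X from today's capability" implies a present-day high-end baseline of around 15–16 teraflops, and 64X is six doublings – which, at the classic 18-to-24-month Moore's Law cadence, lands roughly nine to twelve years out from 2016.

```python
# Rough sanity check on the keynote figures (our own arithmetic, not AMD's).
import math

target_tflops = 1000   # stated goal for full immersion
speedup = 64           # "64X from today's capability"

baseline_tflops = target_tflops / speedup   # 15.625
print(f"Implied present-day baseline: {baseline_tflops:.1f} TFLOPS")

# 64x is six doublings; at 18-24 months per doubling that is ~9-12 years.
doublings = math.log2(speedup)
years_min = doublings * 1.5
years_max = doublings * 2.0
print(f"{doublings:.0f} doublings -> {years_min:.0f} to {years_max:.0f} years from 2016")
```

The nine-year end of that range is what puts the full-immersion marker near 2025, the date that comes up later in this conversation.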
It's been said that 90 frames per second (fps) is the gold standard, to use the term again, for successful VR. So, what do you think you could gain from, say, a 120 fps experience or beyond that?
No – once you hit that. With the products we have today, we're not seeing that humans can discern a difference once you get above 90. The eye is just not picking that up.
To be honest, even when you watch VR that's, say, 30 or 60 frames per second, if you did a demo with that, you'd say, "Well, I didn't see a difference." But, you'd actually feel a difference.
Without a latency of below 10 milliseconds and a frame rate of at least 90 fps, although it looks nominally just fine to you, your body knows that something is amiss.
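The two figures in that answer fit together: sub-10 milliseconds is the motion-to-photon latency target, while 90 fps sets the rendering budget – at 90 frames per second, each frame has to be produced in roughly 11.1 ms. A quick illustration of the per-frame budget at common refresh rates:

```python
# Per-frame time budget at common refresh rates. At 90 fps the renderer has
# about 11.1 ms per frame; at 30 fps it has 33.3 ms, which is why lower
# frame rates "feel" wrong in VR even when they look acceptable on a flat screen.
for fps in (30, 60, 90, 120):
    budget_ms = 1000 / fps
    print(f"{fps:3d} fps -> {budget_ms:5.1f} ms per frame")
```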
You made mention of Moore's Law in the talk, and I know there's the obvious fact in that we have to increase by X every number of years to continue going forward. But, do you think there's any other way Moore's Law is going to play into this – are we going to have to get creative going forward, in terms of the interpretation of Moore's Law, to get these exponential increases?
There absolutely will be different interpretations. So, at that point it turns toward what I call "Moore's Law Plus." Why do I call it that?
What it means is that the transistor alone shrinking, as it historically did, to give you a doubling of density at the same cost every 18 to 24 months – that is gone. Because, even in new technology nodes like FinFET, where we have the new RX 480 graphics and the new Zen core, there's a tremendous density and low-power advantage, but the costs go up with these new technologies.
So, it's not pure Moore's Law. Moore's Law Plus means that, in some instances, it costs more. But, even more so, Moore's Law Plus says that creativity will never stop. And so, it will be ingenuity at the system level that puts solutions together.
It might be combinations of CPU and GPU, other accelerators, different memory configurations, how they're pieced together – there's room for lots of innovation at the next level.
To that end, I'm wondering about the graph on screen during the keynote that called out 2025 as the point at which we'll reach full immersion. Is that specifically the point in time at which you think we'll be able to achieve full immersion, and why do you think it'll take that long?
Well, it may not be. The rate of improvement really will follow a Moore's Law type of pace. It will be, again, Moore's Law Plus. It won't just be the semiconductor transistor, but a combination of design, the semiconductor and how you architect those system solutions that will keep us on that pace.
It takes multiple factors, though. You need the compute, so the CPU and graphics unit needs to scale – there I have no doubt. And, I can tell you we're quite confident from an AMD standpoint on how to continue that curve.
But, we do need the same advances in display technology as well to get these form factors. We'll also need very, very high-bandwidth, low-latency wireless connections to the device, because, to get lifelike, full presence, that computing – particularly in its initial capabilities – won't be something you can wear on the device that's providing the immersive experience, whether that be a head-mounted display or just a pair of glasses.
We'll need a combination of technologies to deliver that.
We've seen VR pop up in various inflections over the past two decades or so; of course, Nintendo comes to mind. These attempts seemed ahead of their time because the rest of the industry, in terms of all the supporting components, just wasn't there to keep pushing it.
Now, it seems that we're in a better place than ever for this to take off, but still, it seems like we're in this race against time where you have this power-versus-price, chicken-and-egg scenario that has to come to an end soon for this to reach critical mass. Does that concern you at all for the future of VR?
At AMD, we frankly aim to directly impact that adoption rate. That's why, for instance, we brought that $200 price point for the RX 480. That broke the paradigm – that is a VR-ready graphics card. You can have a wonderful VR experience, 90 frames per second, very low latency with that graphics card.
Now, that's just great. What do you have that experience with? We're certified with Oculus VR, [and] we're certified with HTC Vive.
So, the other element, of course, is the content development. A key point in our keynote today was to actually bring partners in. We didn't want you to just hear from AMD – we wanted you to hear from our partners that they are truly enabled to develop content, and that they're using the kinds of tools we have open on GPUOpen.com. They're using image capture to create 360-degree images, or post-processing that with our graphics cards. It's working today.
No, it's not the futuristic photorealism or true presence – that will have to come down the road with further enhancements – but that great VR experience starts today.
You guys are clearly beating the drum of open standards and tackling it directly with initiatives like GPUOpen, but it seems to me that there is already a good deal of segmentation happening in the space, at least in how each current device is capable of delivering these experiences. There are key differences between, say, the HTC Vive, the Oculus Rift or even the Gear VR, in that they're very specific experiences that developers have to tailor their content to, to a point. First off, how much does that concern you, and is that even a solvable problem?
It's very important to us. It really gets to the heart of, "How can open standards – or, at least, a readily accessible software library of developments – become available?"
One of the things we've done is be very active in the platforms. You know, OpenVX for computer vision, Vulkan and OpenGL, and DirectX. These open standards will evolve to cover VR.
In the meantime, the way that a number of these developers are accelerating development is that they're actually using game engines. So, the same game engine libraries that create those two-dimensional, flat-screen images – the ones any gamer will tell you they spend more time on than they care to admit – are being redeployed and modified to very quickly create VR content.
That's fundamental. If that piece were missing, you would – rightfully so – question our confidence in content creation for sure.
With that in mind then, how do you think we'll achieve that necessary critical mass moment in VR for it to become as ubiquitous as, say, a smartphone?
Again, the smartphone took off once the app store was there. So, I honestly think it'll be a straight analogy. Initially, there'll be limited content development. It's available, [and] it's starting now.
But, what happens is, once you start getting a couple of killer apps and people understand – and, of course, they'll learn the tricks that I described earlier – that's when it'll be deemed real.
And, what you heard from our keynote today is that there are many titles in development. Not all of them will become hits, but all it takes is a few that really capture the imagination of consumers. Then your friend has one, and it's, "Oh, my god, you've got to get this VR title."
It'll be like today: just as you can run PC gaming on various PCs, you'll be able to run this on various head-mounted displays. What we've done at AMD to make this easier is that we actually have an open-source package called LiquidVR – we hide that complexity.
We actually recognize which head-mounted display you put on. You don't have to worry about it. If we detect it's a Samsung, HTC or Oculus headset, we can optimize for that.
Well, in that case, obviously early adopters are important, but how important are they to this whole scheme, and how long do you expect this early adopter phase to last?
Well, I think it's going very quickly. Gaming and entertainment are really where you're going to see those killer apps. They're all over.
By the way, for the consumer, that's what's readying adoption. Because the initial systems, although infinitely more affordable than they were just a few years ago, still cost more to buy today than the mobile device you need for day-to-day personal needs.
But, it'll certainly be at the same cost point any gamer is paying today. You could say, "Oh, it's limited to just gamers."
Well, you know as well as I do, that's actually a huge market. More people than most of the public realizes absolutely want to pay a little bit more for a phenomenal experience.
So, we're going to see those initial titles come out in entertainment and in gaming, and that's going to provide an economy of scale. And, because it's a whole new medium, people are going to see those types of software, hardware developed there, and doctors are going to say, "I can use this in medicine, I can train students here."
You're going to see retail folks say, "My god, now that I see that capability in those two verticals, I can build a VR showroom. I'm paying so much for bricks and mortar, I can take it to consumers this time."
We could spend an hour, and I could walk you through industry by industry that I think could be disrupted. But, it starts by an early adopter, and I believe that'll be gaming, entertainment and social media.
As soon as we get that first social media app up and running on VR, you know that's going to take off. And, I'm not the only one with that opinion – I'm sure Mark Zuckerberg, with his purchase of Oculus, made his view pretty clear there.
Yeah, he made his bets early. It seems to me then, that this is a particularly important year for Sony with its install base of, what, more than 40 million PS4 units? [And,] directly with your technology inside, it seems that quite a lot rides on the success of PlayStation VR to get that message out there.
Yeah, and Sony has stated as much in that regard – at E3, they pointed to this fall with more details.
But, we have a tremendous partnership with Sony, and there are millions of PlayStation 4 units in that install base with AMD technology.
What about location-based VR [publicly accessible VR experience installations]? I'm curious as to how [that approach] solves the install base problem we have right now. Is location-based VR directly solving the install-base issue, or is it more about a cult of personality sort of thing?
I view it as a different way to experience it. So, with location-based VR – I think about it like the iCafes in China. In China, you couldn't get that gaming experience at home; it was either not affordable or you couldn't get access to that whole gaming library, and iCafes provided that.
They were, in fact, fixed, location-based gaming stations. So, think of that as an analogy for people who may not have access to home-installed VR – but actually, it's more than that.
Location-based VR offers yet a better experience. Why? Because it's a customization around the VR that can actually provide sensory feedback tailored to the game engines you can run on that location-based system. And so, that adds yet another whole dimension.
I'm actually very excited about how quickly location-based VR is moving – here we are, just a nascent industry, and you're already seeing investments in what I'll call a Stage Two, improved-experience capability over the initial VR systems out today.
The Wall Street Journal just reported that several amusement parks have invested in VR headsets to put on their roller coasters. So, you actually ride the roller coaster, but you wear a head-mounted display.
So, you're getting VR augmentation that is timed and synchronized with the ride.
What do you think those types of extra-sensory VR experiences look like in the home? Is it us getting into full-body suits with haptics? How do you see this playing out as we move forward?
Well, there's a wide range of possibilities. That's a holy grail people are chasing – how to provide that haptic feedback.
And, the technology has progressed rapidly. It's not body-suit armor. What you'll see is that the sensors and haptic feedback devices are getting so much smaller, year after year, that you're actually going to see clothing with that haptic capability built in.
So, if you were playing a game or experiencing a VR sensation, it might look like you have some unique clothing and a pair of thin gloves on, and you wouldn't look much out of place at all. And yet, you would be adding a very strong sensory perception to the VR experience.
This reminds me of something you said on stage, that we'll do whatever it takes to gain that next echelon of entertainment experience. But, do you think that there might be a wall there [with VR]? I ask because, right now, some of the headsets are just too uncomfortable to wear for longer than, say, 45 minutes or so. Say, with phones, it's just so simple, there's no friction there, whereas to me, even the idea of putting on a headset is a friction point.
I think it's going to move much more quickly than we think. The more visible head-mounted display will give way to glasses, and you'll start seeing more and more mixed reality.
[With that,] you could run completely computer-generated virtual reality, or, oftentimes, it will really be mixed reality, where you have an overlay of augmented reality – with useful information you need, or with entertainment that creates an interactive nature.
Again, this is the start of what we consider the immersive era. Certainly, as a technology company, AMD highlights the visualization and compute advances.
The folks working on lenses will highlight what they're doing to bring those advances – high, dense pixelation – to glasses. And, I can imagine how, down the road, we'll figure out how to implement this into contact lenses.
You're going to see innovation at each turn in every piece of what I'll call the VR supply chain.
So, I come back to this notion of "64X, 1,000 teraflops" of compute power needed for fully immersive VR. What do you think a device that's capable of that looks like? Or, is it just a matter of it getting easier and easier?
It's always just a matter of time. Every time, that next, better experience will start in a bigger form factor – what I consider the PC in a box. Then, you'll see it shrunk to the size of what looks like a memory stick, or double that size so you can put it in your pocket.
Eventually, even that little stick you put in your pocket shrinks to be embedded in a pair of glasses.
Obviously, some futurists even talk about embedding it in the human body. I'm not going to prognosticate for when it gets to that point.
But, I will tell you that I've had many years in this industry, and I note that every time we hit certain barriers some people say, "Well that's it. There's a reason why it can't go any further." Every time, those prognosticators are wrong.
Innovation at every turn continues to shrink the capability to provide that type of compute [and] that type of visualization.
The final factor I think of here is the transmission of data over the internet. Right now, at least in the US, we're even behind other nations in the speed at which information can be transmitted – the pure bandwidth we have access to. It seems like, to power what are going to be these incredible social VR experiences, we're going to need way more bandwidth than we're capable of. How do we get over that conundrum?
Well, there are two things. For one, people have said, "Why can't you run it cloud-based?" So, you're seeing way more prevalence of optics, with the cost of optics coming down.
But, for virtual reality, how do we get that kind of bandwidth? Well, you're already seeing WiGig and other kinds of technologies that dramatically increase bandwidth.
And, I'll tell you, just like you heard my comments on compute and visualization, and that we don't see an end in sight, there's a number of innovations in the lab that will be applying that same Moore's Law Plus of innovation to keep it on an exponential rate of improvement.