To beat Nvidia GeForce RTX 2080 Ti, here’s how much Intel’s GPU needs to improve


While Nvidia just released its RTX 20 series of graphics cards, all but trouncing AMD’s latest and most powerful offerings, it may have another challenger to contend with before this series is through: Intel. The firm recently teased plans to launch discrete graphics cards in 2020.

This will mark the first time in more than 20 years that Nvidia has faced competition other than AMD in the graphics card market. However, judging by where Intel’s best integrated graphics – the graphics cores embedded into its CPUs – stand right now, the company has quite a bit of catching up to do, especially if it plans to expand upon its current GPU architecture to power these cards.

So, let’s put into perspective just how far Intel needs to go from its integrated graphics platform to meet or exceed what Nvidia’s best is capable of – on paper. For this theoretical (and highly speculative) exercise, assuming Intel builds upon its existing technology, we’re looking at three key performance metrics of the Intel UHD Graphics 630 and how they compare to the flagship Nvidia GeForce RTX 2080 Ti.

That way, should Intel’s discrete graphics cards launch in 2020 before Nvidia gets around to updating its own hardware, we’ll see the bar Intel must clear by then if it wants to be competitive.

Intel's most daunting rival: the Nvidia GeForce RTX 2080 Ti.

Intel’s got a lot of work to do

Right off the bat, we know that the graphics processing unit (GPU) inside Nvidia’s RTX 2080 Ti runs at a base clock speed of 1,350 MHz and can be boosted up to 1,635 MHz with basic overclocking tools. Intel’s UHD Graphics 630 runs at a base clock of just 350 MHz, though its dynamic frequency feature can push it up to 1,200 MHz.

Here, Intel needs to improve its GPU’s base clock speed by roughly 285% – nearly 3.9 times its current speed – simply to meet the RTX 2080 Ti on its own terms. That said, the Intel GPU’s boosted speed isn’t terribly far off as it is: it needs only about a 36% bump.
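If you want to check the math, here’s the arithmetic behind those figures as a minimal Python sketch, using the spec-sheet clocks quoted above (the pct_increase helper is just for illustration):

```python
def pct_increase(current, target):
    """Percent increase needed to go from `current` to `target`."""
    return (target - current) / current * 100

# Base clock (MHz): Intel UHD Graphics 630 vs. Nvidia GeForce RTX 2080 Ti
print(f"Base clock: +{pct_increase(350, 1350):.1f}%")   # +285.7%, i.e. ~3.9x
# Boost clock (MHz): UHD 630 dynamic-frequency ceiling vs. RTX 2080 Ti boost
print(f"Boost clock: +{pct_increase(1200, 1635):.1f}%")  # +36.3%
```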

As for how quickly the RTX 2080 Ti can move data to and from its memory, the card’s memory bandwidth is 616 gigabytes per second (GB/s). Intel’s GPU currently has to share memory bandwidth with the processor itself, at 41.6 GB/s. That means Intel is looking at a whopping 1,380% increase – nearly 15 times its current bandwidth – if it wants its GPU to compete.
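The same back-of-the-envelope math, applied to the bandwidth figures above:

```python
# Memory bandwidth (GB/s): shared DDR4 pool vs. the RTX 2080 Ti's GDDR6
ddr4_shared, gddr6 = 41.6, 616
print(f"Bandwidth: +{(gddr6 - ddr4_shared) / ddr4_shared * 100:.0f}%")  # +1,381%, i.e. ~14.8x
```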

That said, Intel’s GPU can currently use up to 64GB of general-purpose system memory for storing graphics textures, whereas the RTX 2080 Ti is stuck with the card’s onboard 11GB of video memory. However, Nvidia’s graphics memory is much faster than any standard DDR4 memory out there, at 14 gigabits per second (Gbps) per pin compared to DDR4’s roughly 3.2 to 4.2 Gbps. Here, Intel’s GPU will need to up its memory speed by 233% – roughly 3.3 times – even against the fastest DDR4.
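And once more for the per-pin data rates, taking the fastest DDR4 figure above as the baseline:

```python
# Per-pin data rate (Gbps): high-end DDR4 vs. the RTX 2080 Ti's GDDR6
ddr4, gddr6 = 4.2, 14
print(f"Data rate: +{(gddr6 - ddr4) / ddr4 * 100:.0f}%")  # +233%, i.e. ~3.3x
```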

Obviously, Intel will not employ general use memory in whatever graphics card it creates, but rather GDDR6 or HBM2 memory, used by Nvidia and AMD, respectively. Or, Intel could surprise us all with an interesting application of its memory-like Intel Optane storage technology. Either way, Intel’s GPU is going to need much more capacious and speedy memory attached than Intel silicon is used to.

Again, this analysis is highly speculative and assumes that Intel will simply expand its existing UHD Graphics platform as the GPU powering its 2020 graphics cards. It also assumes that Nvidia will keep to its own two-year cadence for generational leaps.

Regardless, this is just how sharply Intel’s GPU technology needs to improve by 2020 to properly challenge Nvidia’s current best work. Time to get cracking, Blue Team.

Joe Osborne

Joe Osborne is the Senior Technology Editor at Insider Inc. His role is to lead the technology coverage team for the Business Insider Shopping team, facilitating expert reviews, comprehensive buying guides, snap deals news and more. Previously, Joe was TechRadar's US computing editor, leading reviews of everything from gaming PCs to internal components and accessories. In his spare time, Joe is a renowned Dungeons and Dragons dungeon master – and arguably the nicest man in tech.