Nvidia chose AMD over Intel for its most powerful product yet - here’s why

Nvidia DGX A100
(Image credit: Nvidia)

Last week, Nvidia made an announcement that shook the industry: for the first time, it set aside its decades-old rivalry with AMD, selecting the EPYC server processor for its DGX A100 deep learning system over Intel’s Xeon.

In a statement to CRN, Charlie Boyle, Vice President and General Manager of DGX Systems at Nvidia, explained the rationale behind the switch.

"To keep the GPUs in our system supplied with data, we needed a fast CPU with as many cores and PCI lanes as possible. The AMD CPUs we use have 64 cores each, lots of PCI lanes, and support PCIe Gen4," he said.

Intel is expected to add PCIe 4.0 to its feature list when it launches its 10nm Ice Lake server chips later this year but, for now, can only sit and watch as AMD nibbles away at its market share. EPYC also supports eight memory channels, two more than Intel’s Xeon Scalable processors.

The EPYC 7742 delivers more cores (64 vs 56 for the Intel Xeon Platinum 9282) with significantly more cache onboard (256MB vs 77MB), a lower TDP (225W vs 400W) and a far lower price tag ($6,950 vs circa $25,000).

These marked improvements are largely down to AMD’s much finer 7nm manufacturing process, which packs far more transistors into the same die area, improving power efficiency and allowing higher clock speeds.

Time will tell whether the move marks a permanent thawing of the relationship between Nvidia and AMD, or just a temporary truce.

Desire Athow
Managing Editor, TechRadar Pro

Désiré has been musing and writing about technology over a career spanning four decades. He dabbled in website builders and web hosting when DHTML and frames were in vogue, and started writing about the impact of technology on society just before the Y2K hysteria at the turn of the last millennium.