Last week, Nvidia made an announcement that shook the industry: for the first time, it set aside its decades-old rivalry with AMD, selecting the EPYC server processor for its DGX A100 deep learning system and dropping Intel’s Xeon in the process.
In a statement to CRN, Charlie Boyle, Vice President and General Manager of DGX Systems at Nvidia, explained the rationale behind the switch.
"To keep the GPUs in our system supplied with data, we needed a fast CPU with as many cores and PCI lanes as possible. The AMD CPUs we use have 64 cores each, lots of PCI lanes, and support PCIe Gen4," he said.
Intel is expected to add PCIe 4.0 support when it launches its 10nm Ice Lake server chips later this year but, for now, can only sit and watch as AMD nibbles away at its market share. EPYC also supports eight-channel memory, two channels more than Intel’s six-channel Xeon Scalable processors.
The EPYC 7742 delivers more cores (64 vs 56 for the Intel Xeon Platinum 9282), significantly more onboard cache (256MB vs 77MB), a lower TDP (225W vs 400W) and a far lower price tag ($6,950 vs circa $25,000).
These marked improvements are largely down to AMD’s finer 7nm manufacturing process, which packs far more transistors into the same area, improving power efficiency and enabling higher clock speeds.
Time will tell whether the move marks a permanent thawing of the relationship between Nvidia and AMD, or just a temporary truce.