Broadcom's next-generation PCIe switches will support AMD's socket-to-socket Infinity Fabric technology (also known as xGMI) – the company's standard for boosting data transfer speeds between CPUs in a system.
The Infinity Fabric interconnect, normally used in EPYC servers, handles package-to-package connectivity and can also operate as PCIe Gen 5 or CXL for attached devices. Now, with Broadcom supporting the standard, the technology will make its way into its PCIe switches. But the real secret weapon here is Ethernet, according to ServeTheHome.
AMD has thrown its support behind the as-yet-unformed Ultra Ethernet Consortium (UEC), which aims to use this 50-year-old connectivity technology as the key interconnect for AI clusters, rather than InfiniBand, which has been used to date. InfiniBand has traditionally dominated high-performance computing (HPC), while Ethernet was adopted more broadly in mainstream networking. Of late, however, Ethernet has grown into a technology that can handle high-speed data transfer in the age of data-intensive workloads and AI.
Nvidia's major chips, including the A100 and H100 accelerators, use its proprietary NVSwitch chips to interconnect GPUs within a chassis, and an external link to scale beyond it.
NVLink, a multi-lane short-range link that rivals PCIe, lets a device handle multiple links simultaneously in a mesh network orchestrated through a central hub. With AMD now throwing its weight behind the UEC, it hopes to raise its game by relying on cross-industry partnerships to resolve the challenges where Nvidia's NVLink currently holds an advantage.
AMD recently launched its powerful Instinct MI300 accelerator, for instance – but when it comes to real-world deployment, scaling these chips and enabling fast communication between them is just as important as raw performance. Nvidia's NVLink allows its deployments to scale extremely well, a gap AMD will be looking to close with its latest developments.
Keumars Afifi-Sabet is the Technology Editor for Live Science. He has written for a variety of publications including ITPro, The Week Digital and ComputerActive. He has worked as a technology journalist for more than five years, having previously held the role of features editor with ITPro. In that role, he oversaw the commissioning and publishing of long-form content in areas including AI, cyber security, cloud computing and digital transformation.