Is Nvidia opening up its NVLink doors even further? New partnership with Arm will see greater integration across many kinds of chips
Arm CPUs can compete directly with Nvidia and Intel server processors
- Arm-based Neoverse CPUs can now communicate directly and efficiently with Nvidia GPUs
- NVLink Fusion eliminates PCIe bottlenecks for AI-focused server deployments
- Hyperscalers such as Microsoft and Google can now pair custom Arm CPUs with Nvidia GPUs
Nvidia has announced Arm-based Neoverse CPUs will now be able to integrate with its NVLink Fusion technology.
This integration allows Arm licensees to design processors capable of direct communication with Nvidia GPUs.
Previously, NVLink connections were largely limited to Nvidia's own CPUs or to servers built around Intel and AMD processors. Hyperscalers such as Microsoft, Amazon, and Google can now pair custom Arm CPUs with Nvidia GPUs in their workstations and AI servers.
Expansion of NVLink beyond proprietary CPUs
The development also enables Arm-based chips to move data more efficiently than over standard PCIe connections.
Arm confirmed its custom Neoverse designs will include a protocol that allows seamless data transfer with Nvidia GPUs.
Arm licensees can build CPU SoCs that connect natively to Nvidia accelerators by integrating NVLink IP directly.
Customers adopting these CPUs will be able to deploy systems where multiple GPUs are paired with a single CPU for AI workloads.
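To make the topology concrete, here is a minimal sketch of how software sees that arrangement. It uses only standard CUDA runtime calls and is not specific to NVLink Fusion, Neoverse, or any Arm host; it simply enumerates the GPUs visible to a single host CPU and checks whether each pair can exchange data directly, the path that NVLink-class links accelerate compared with staging transfers through host memory over PCIe.

```cuda
// Minimal sketch: enumerate GPUs attached to one host CPU and check
// direct (peer-to-peer) reachability between every pair of devices.
// Plain CUDA runtime API only; nothing here assumes NVLink Fusion.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("GPUs visible to this CPU: %d\n", count);

    // Peer access means device-to-device transfers can skip the round trip
    // through host (CPU) memory over PCIe.
    for (int src = 0; src < count; ++src) {
        for (int dst = 0; dst < count; ++dst) {
            if (src == dst) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, src, dst);
            printf("GPU %d -> GPU %d: peer access %s\n",
                   src, dst, canAccess ? "available" : "not available");
        }
    }
    return 0;
}
```

Where the check succeeds, `cudaDeviceEnablePeerAccess` and `cudaMemcpyPeer` let data move GPU-to-GPU directly, which is the kind of traffic pattern NVLink is designed to carry.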
The announcement was made at the Supercomputing '25 conference and reflects collaboration between CPU and GPU developers.
Nvidia’s Grace Blackwell platform currently pairs multiple GPUs with an Arm-based CPU, while other server configurations rely on Intel or AMD CPUs.
Microsoft, Amazon, and Google are deploying Arm-based CPUs to gain more control over their infrastructure and reduce operational costs.
Arm itself does not manufacture CPUs; it licenses its instruction set architecture and sells core designs that speed up development of Arm-based processors.
NVLink Fusion support in Arm chips allows these processors to work with Nvidia GPUs without requiring Nvidia CPUs.
The ecosystem also affects sovereign AI projects, where governments or cloud providers may want Arm CPUs for control-plane tasks.
NVLink allows these systems to use Nvidia GPUs while maintaining custom CPU configurations.
SoftBank, which previously held shares in Nvidia, is backing OpenAI's Stargate project, which plans to use both Arm and Nvidia chips.
NVLink Fusion integration, therefore, provides options for pairing Arm CPUs with market-leading GPU accelerators in multiple environments.
From a technical perspective, NVLink expansion increases the number of CPUs that can be used in Nvidia-centric AI systems.
It also allows future Arm-based designs to compete directly with Nvidia’s Grace and Vera processors, as well as Intel Xeon CPUs, in configurations where GPUs are the main computational units.
The development may reduce the appeal of alternative interconnects or competing AI accelerators, but chip development cycles could affect adoption timing.