Forget GPUs — China unveils 2 ExaFLOPS supercomputer using only CPUs, packing 47,000 processors into 92 compute cabinets as it looks to supersede the US once again

National Supercomputing Center in Shenzhen
(Image credit: National Supercomputing Center in Shenzhen)

  • Lingsheng system targets two exaFLOPS using only central processing units
  • CPU-only architecture challenges GPU-dominated supercomputing industry standards
  • System design integrates high-bandwidth memory and high-speed interconnect networks

A Chinese supercomputing center has announced plans for a machine that would reach two exaFLOPS using nothing but central processors.

The Lingsheng system, unveiled at an April 2026 conference in Shenzhen, would pack 47,000 processors into just 92 compute cabinets.

Lu Yutong, director of the National Supercomputing Center in Shenzhen and the system's chief designer, explained that the hardware and software stack is "fully independently controllable."


A fundamentally different architectural strategy

Current exascale machines rely heavily on GPU accelerators or other specialized hardware.

This makes the CPU-only approach a major departure from established global trends.

The system leverages domestically produced high-performance CPUs alongside on-chip high-bandwidth memory and high-speed interconnect networks.

It also incorporates 3D floating orthogonal computing and full liquid cooling to manage thermal output.

According to the announcement, the Lingsheng platform achieves breakthroughs in six major technical areas: architecture, performance, energy consumption, programming, scalability, and reliability.

The system supports exascale computing power with exascale storage and petascale communication, and employs what officials described as the world's largest-scale centralized liquid cooling technology.

A pilot verification phase uses 100 Huawei Kunpeng servers built on Arm-based Taishan cores, totaling 12,800 cores.

When scaled to full production, the same system design would incorporate 1,580 blade servers using x86 CPUs with 101,120 cores and a theoretical peak above 10 petaflops.

The complete infrastructure also features 36 network cabinets supporting a million-port interconnect.

It will also feature 650PB of planned storage distributed across 428 nodes and 67 liquid-cooled storage cabinets that deliver 10TB/s of bandwidth.
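As a sanity check, the per-server and per-node figures implied by the announcement can be worked out directly. The totals below come from the announcement itself; the derived ratios are our own back-of-envelope arithmetic, not official specifications:

```python
# Back-of-envelope arithmetic on the figures quoted in the announcement.
# Totals are from the announcement; the derived per-unit ratios are illustrative.

# Pilot phase: 100 Kunpeng servers, 12,800 Arm cores in total
pilot_cores_per_server = 12_800 // 100      # 128 cores per server

# Production design: 1,580 x86 blade servers, 101,120 cores in total
prod_cores_per_server = 101_120 // 1_580    # 64 cores per server

# Storage: 10 TB/s aggregate bandwidth across 428 storage nodes
bw_per_node_gbps = 10_000 / 428             # roughly 23.4 GB/s per node

print(pilot_cores_per_server, prod_cores_per_server, round(bw_per_node_gbps, 1))
```

The 128- and 64-core-per-server figures are consistent with dense multi-socket server configurations, though the announcement does not confirm the socket layout.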

The current fastest computer in the world, the U.S. Department of Energy's El Capitan, runs on 44,544 AMD MI300A APUs, integrating CPU and GPU silicon on a single package.

If Lingsheng achieves a sustained 2 exaFLOPS, it would surpass El Capitan's measured Linpack score of 1.809 exaFLOPS.

On the flip side, the 2 exaFLOPS figure for the Lingsheng system is a theoretical peak, while El Capitan's theoretical peak already stands at 2.79 exaFLOPS.

Compared peak to peak, then, the claim of overtaking the world's fastest computer does not look attainable.
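One way to frame the peak-versus-sustained gap: El Capitan delivers roughly 65% of its theoretical peak on Linpack. If Lingsheng's 2 exaFLOPS turns out to be a peak figure and the machine managed a similar efficiency (a purely hypothetical assumption for illustration), its sustained score would land well below El Capitan's measured result:

```python
# Hypothetical efficiency comparison using the numbers quoted in the article.
el_capitan_peak = 2.79        # exaFLOPS, theoretical
el_capitan_linpack = 1.809    # exaFLOPS, measured Linpack score
lingsheng_peak = 2.0          # exaFLOPS, announced target

# El Capitan's Linpack efficiency, applied to Lingsheng as an assumption
efficiency = el_capitan_linpack / el_capitan_peak    # roughly 0.648
lingsheng_projected = lingsheng_peak * efficiency    # roughly 1.30 exaFLOPS

print(round(efficiency, 3), round(lingsheng_projected, 2))
```

Under that assumption, a 2 exaFLOPS peak would translate to a sustained score of about 1.3 exaFLOPS, short of El Capitan's 1.809. Actual Linpack efficiency varies widely between architectures, so this is only a framing device, not a prediction.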

Unanswered questions and unproven capabilities

Several critical questions remain unanswered regarding the Lingsheng system, primarily because no benchmark data exists for the machine.

Although China asserts that this system will rely on no non-Chinese vendors, the country's domestic x86 options remain limited to Zhaoxin and Hygon.

Neither of these domestic alternatives has demonstrated processors that can compete with current-generation parts from Intel or AMD.

The announcement also failed to name specific suppliers for the production system and provided no operational timeline for its completion.

On the application side, the technology spans nine fields: remote sensing, materials science, bioinformatics, meteorology, pharmaceuticals, oil exploration, artificial intelligence, life sciences, and electromagnetic simulation.

One research team reported achieving parallel scalability of 81% for first-principles calculations involving 100 million atoms.

Another group claimed that trillion-scale virtual screening of compounds could improve efficiency by a factor of 1,000 through a combination of AI and reinforcement learning.

However, these remain unverified claims until a functioning machine produces benchmark results.

Via Tom's Hardware




Efosa Udinmwen
Freelance Journalist

Efosa has been writing about technology for over 7 years, initially driven by curiosity but now fueled by a strong passion for the field. He holds both a Master's and a PhD in sciences, which provided him with a solid foundation in analytical thinking.
