Nvidia's closest rival once again obliterates cloud giants in AI performance; Cerebras Inference is 75x faster than AWS, 32x faster than Google on Llama 3.1 405B

Cerebras WSE-3
(Image credit: Cerebras)

  • Cerebras hits 969 tokens/second on Llama 3.1 405B, 75x faster than AWS
  • Claims industry-low 240ms latency, nearly twice as fast as Google Vertex
  • Cerebras Inference runs on the CS-3 with the WSE-3 AI processor

Cerebras Systems says it has set a new benchmark in AI performance with Meta’s Llama 3.1 405B model, achieving an unprecedented generation speed of 969 tokens per second.

Third-party benchmark firm Artificial Analysis has claimed this performance is up to 75 times faster than GPU-based offerings from the major hyperscalers. It was nearly six times faster than SambaNova at 164 tokens per second, more than 32 times faster than Google Vertex at 30 tokens per second, and far ahead of Azure at just 20 tokens per second and AWS at 13 tokens per second.

Additionally, the system demonstrated the fastest time to first token in the world, clocking in at just 240 milliseconds - nearly twice as fast as Google Vertex at 430 milliseconds and far ahead of AWS at 1,770 milliseconds.
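A quick back-of-envelope check of those claims, using only the benchmark figures reported above (the 500-token example request is an illustrative assumption, not from the benchmark):

```python
# Reported Llama 3.1 405B figures from the Artificial Analysis benchmark:
# generation speed in tokens/second and time to first token in milliseconds.
throughput_tps = {"Cerebras": 969, "SambaNova": 164, "Google Vertex": 30,
                  "Azure": 20, "AWS": 13}
ttft_ms = {"Cerebras": 240, "Google Vertex": 430, "AWS": 1770}

# Generation-speed multiples relative to Cerebras.
base = throughput_tps["Cerebras"]
for name, tps in throughput_tps.items():
    print(f"{name}: {base / tps:.1f}x")  # AWS works out to ~74.5x, i.e. the ~75x claim

# Simple end-to-end latency model: time to first token plus decode time.
def response_time_s(provider: str, output_tokens: int) -> float:
    return ttft_ms[provider] / 1000 + output_tokens / throughput_tps[provider]

# Hypothetical 500-token response, to show what the gap means in practice.
print(f"Cerebras: {response_time_s('Cerebras', 500):.2f}s")  # 0.76s
print(f"AWS: {response_time_s('AWS', 500):.2f}s")            # 40.23s
```

Under this model a response that finishes in under a second on Cerebras takes around forty seconds on AWS, which is what makes the "real-time" framing below plausible.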

Extending its lead

“Cerebras holds the world record in Llama 3.1 8B and 70B performance, and with this announcement, we’re extending our lead to Llama 3.1 405B - delivering 969 tokens per second," noted Andrew Feldman, co-founder and CEO of Cerebras.

"By running the largest models at instant speed, Cerebras enables real-time responses from the world’s leading open frontier model. This opens up powerful new use cases, including reasoning and multi-agent collaboration, across the AI landscape.”

The Cerebras Inference system, powered by the CS-3 supercomputer and its Wafer Scale Engine 3 (WSE-3), supports full 128K context length at 16-bit precision. The WSE-3, known as the “fastest AI chip in the world,” features 44GB on-chip SRAM, four trillion transistors, and 900,000 AI-optimized cores. It delivers a peak AI performance of 125 petaflops and boasts 7,000 times the memory bandwidth of the Nvidia H100.
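Those headline specs can be sized with some simple arithmetic; note the H100 bandwidth figure below is my own assumption (roughly the published HBM3 number), used only to put the 7,000x claim in absolute terms:

```python
# WSE-3 figures quoted above.
sram_bytes = 44 * 1024**3  # 44GB on-chip SRAM
cores = 900_000            # AI-optimized cores

# On-chip SRAM available per core, in KB.
print(f"{sram_bytes / cores / 1024:.0f} KB of SRAM per core")  # 51 KB

# Rough implied aggregate bandwidth, assuming ~3 TB/s for an H100's
# HBM (assumed figure, not stated in the article).
h100_tb_s = 3.0
print(f"~{7000 * h100_tb_s / 1000:.0f} PB/s implied on-chip bandwidth")  # ~21 PB/s
```

Keeping weights and activations in tens of kilobytes of SRAM next to each core, rather than in off-chip HBM, is where that bandwidth multiple comes from.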

Meta’s GenAI VP Ahmad Al-Dahle also praised Cerebras' latest results, saying, “Scaling inference is critical for accelerating AI and open source innovation. Thanks to the incredible work of the Cerebras team, Llama 3.1 405B is now the world’s fastest frontier model. Through the power of Llama and our open approach, super-fast and affordable inference is now in reach for more developers than ever before.”

Customer trials for the system are ongoing, with general availability slated for Q1 2025. Pricing begins at $6 per million input tokens and $12 per million output tokens.
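At those rates, per-request cost is easy to estimate; a minimal sketch (the `request_cost` helper and the example token counts are illustrative, not Cerebras API code):

```python
# Cerebras Inference pricing for Llama 3.1 405B as stated above.
INPUT_PER_M = 6.00    # USD per million input tokens
OUTPUT_PER_M = 12.00  # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request (illustrative helper)."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# Example: a near-full 100K-token context with a 2K-token answer.
print(f"${request_cost(100_000, 2_000):.2f}")  # $0.62
```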

Cerebras tokens per second on Llama 3.1 405B

(Image credit: Cerebras)

Seconds to first token received on Llama 3.1 405B

(Image credit: Cerebras)


Wayne Williams
Editor

Wayne Williams is a freelancer writing news for TechRadar Pro. He has been writing about computers, technology, and the web for 30 years. In that time he wrote for most of the UK’s PC magazines, and launched, edited and published a number of them too.
