If you’ve been following PC hardware news closely for the last couple of months, you’ve been treated to something akin to a wrestling pay-per-view playing out. If you haven’t: my goodness, you’re in for a treat.
The origins of the current situation lie in the crypto boom. Nvidia watched in quiet amazement as Bitcoin’s value skyrocketed through the 2010s, giving rise to a crypto-mining industry. Crypto bros bought powerful, high-throughput graphics cards in bulk and assembled them into mining ‘farms’ – essentially racks of gaming-spec PCs performing mining computations 24/7 – to reap very real financial rewards. This led to an unprecedented spike in graphics card prices, new and used alike, and retailers found their stock depleted almost immediately after each batch arrived.
This has been going on for years. Back in 2015, you could earn close to three dollars a day per terahash. Profitability declined rapidly as the number of Bitcoins still to be ‘unearthed’ diminished and the total network size increased, and as early as 2018, major news outlets such as CNBC declared that crypto mining was no longer profitable.
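To see why that dollars-per-terahash figure collapses as the network grows, it helps to sketch the underlying arithmetic: a miner's expected share of each block reward is proportional to their slice of the total network hashrate. The numbers below are illustrative placeholders chosen to land near the article's 2015-era figure, not data from the piece.

```python
# A minimal sketch of expected Bitcoin mining revenue per terahash.
# All constants here are hypothetical round numbers for illustration.

def daily_revenue_per_th(network_hashrate_th, block_reward_btc, btc_price_usd,
                         blocks_per_day=144):
    """Expected USD per day earned by 1 TH/s of hashrate.

    Your expected share of each block is (your hashrate / network hashrate),
    so revenue falls as the network grows and as the block reward halves.
    """
    btc_per_day = (1.0 / network_hashrate_th) * blocks_per_day * block_reward_btc
    return btc_per_day * btc_price_usd

# Rough mid-2015-style inputs: ~400,000 TH/s network, 25 BTC reward, $280/BTC
print(round(daily_revenue_per_th(400_000, 25, 280), 2))  # → 2.52
```

With these assumed inputs the sketch lands near the "close to three dollars a day" figure; plug in a network hashrate hundreds of times larger and a halved block reward, and the per-terahash return craters, which is the dynamic CNBC was reporting on by 2018.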
But the demand for graphics cards remained huge for long enough that vastly inflated new and used prices became normalized. That normalization proved crucial to what was to come with Nvidia’s RTX 40-series strategy.
Then came the lockdowns. Factories across the world, including those where semiconductors are made, closed their doors for extended periods, and an already stretched GPU supply chain snapped completely. So through 2020 and 2021, if you were in possession of an RTX 3000-series card at all, you had an extremely rare and sought-after commodity. And you know what that does to prices. As reported by PC Gamer, the average selling price of a GPU at retail rose around 300%, from roughly $280-$350 in 2019 to over $1,000 through 2021.
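It's worth noting what a 300% rise actually means: roughly quadrupling, which squares with the quoted dollar figures. A quick sanity check, using the article's rough price points:

```python
# Sanity-checking the reported price jump using the article's rough figures.

def pct_increase(old, new):
    """Percentage increase from old price to new price."""
    return (new - old) / old * 100

# A 300% rise from a $280 average would land at $1,120 - i.e. "over $1,000"
print(round(pct_increase(280, 1120)))  # → 300
```

So the "300%" and "over $1,000" claims are mutually consistent at the bottom of the quoted 2019 range.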
These price hikes came to a head in late 2022, when Nvidia announced pricing for its RTX 40-series cards: $1,599 for the new RTX 4090 and $1,199 for the RTX 4080. Many people pointed out that these were way too expensive.
Nvidia’s rationale for these price points was that Moore’s law is dead: you can’t just keep shrinking dies and cramming in more computational power in a linear fashion to make games ever shinier. Instead, Nvidia CEO Jensen Huang described the process of obtaining more performance as a ‘full-stack challenge’ – in other words, a combination of software and architecture design. And that costs a lot of money. So, to make the 40-series cards a meaningful step forward, they were going to cost considerably more to buy.
Sceptics like myself pointed out that having a market that had normalized vastly inflated GPU prices for the last few years can’t have hurt that pricing strategy, either.
Then something odd happened. The 4080 was due to arrive in two variants – a 16GB card and a cheaper 12GB version. But that never happened. Nvidia ‘un-launched’ the 12GB version. The company framed it as a simple branding problem. “[It’s] a fantastic graphics card, but it’s not named right,” Nvidia claimed in its announcement. “Having two GPUs with the 4080 designation is confusing.”
But gamers saw things differently. This wasn’t a 12GB variant of the same design the 16GB 4080 was built around. It was a completely different card, using the lower-spec AD104 processor rather than the 16GB 4080’s AD103. Its memory bandwidth was lower too – lower than that of the previous-gen 3080 it was replacing, at a much higher price. This was not a good look.
But it was all alright, because the RTX 4070 Ti was coming out next, and this would be the ‘sweet spot’ card, the model that would solve all the problems. The price would be attainable and the performance would be three times that of the previous-generation 30-series cards.
And then it turned out that the RTX 4070 Ti was the new name for that 4080 12GB card. Albeit one with a lower price point, at a $799 MSRP.
But since Nvidia wouldn’t be putting out any first-party Founders Edition cards, the 4070 Ti would only be available from third parties, and they didn’t have to sell it for $799. In other words, that price point was pretty notional and not reflective of how much you might actually pay for one.
The price wasn’t the real problem, though. The performance was. This was the exact same card gamers grumbled about when it was called the 4080 12GB – the one with less memory bandwidth than the older card it replaced. Whacking a '7' and a couple of extra letters into the name doesn’t solve that.
The empire strikes back
And then there were Nvidia’s claims about performance being three times that of the previous-gen cards. That turned out to refer to DLSS performance, not native rendering.
DLSS tech is smart and ever-improving, but it has a long way to go before it becomes indistinguishable from a natively rendered image. It’s what you turn on in a last-ditch effort to hit 60fps, not what you pay $799+ for. Native rendering performance turned out to be about 20% better than the previous-gen card in most testers’ benchmarks, which isn’t quite the same value proposition for a gamer being asked to pay for a more expensive card. As Barron’s points out, the equivalent 30-series card was 70% faster than its predecessor natively, and launched at the same price point.
What Barron’s also observes is that nobody’s buying the 4070 Ti, which was released in January 2023. The marketing about-turns and vagueness about performance have made it something of a poisoned chalice, leaving retailers with plenty of stock – an anomaly in a GPU market where ‘notify me’ has become the new ‘add to basket’.
While sales of the top-end RTX 4090 have reportedly been strong, both the 4070 Ti and the 4080 have failed to conjure up interest. Those slow sales have coincided with poor stock market performance for Nvidia, which has lost nearly half of its market capitalization since 2021. Meanwhile, Sony has been able to ramp up PS5 production now that semiconductor factories have resumed normal operations, giving gamers a much more wallet-friendly option than Nvidia’s far higher price of entry to PC gaming.
Moore’s law may have been dead for years now, but what this saga shows us is that consumers aren’t prepared to pay for its funeral. We want innovation just as much as we ever did – but we can tell when we’re being gouged, too.