RAMageddon: How IT leaders are adapting PC refresh strategy to manage the 2026 memory crunch


Enterprise IT teams have long treated hardware refresh cycles as a predictable routine. Devices were replaced on schedule, budgets were mapped out years in advance, and procurement teams had a fairly good idea what the next round of devices would cost.

That predictability is gone. HP recently revealed that RAM now accounts for roughly 35% of a PC's build cost, up from around 15–18% just a few months ago.

Analysts have also warned that PC prices could rise by 15–20% during the second half of 2026 if manufacturers continue to pass rising component costs, driven by demand for AI systems, on to buyers.


IT leaders tell us volatility is already hitting planning hard. Refresh quotes are arriving 30–60% higher than anticipated, and suppliers are compressing price validity windows — sometimes to just a few hours.

During this “RAMageddon” period, here are five ways enterprises are adjusting their approach to hardware:

1. Moving from time-based to usage-based refresh decisions

For years, the default rule was simple: replace devices every three to five years. It kept refresh cycles predictable, even if many of those machines were still performing well. Now, instead of focusing purely on purchase dates, organizations are beginning to examine how devices actually behave during normal work.

Making that shift usually requires more than just an IT decision. Refresh policies often sit at the intersection of endpoint teams, procurement, and finance, all of whom have different priorities. Security teams may also have requirements around device age or operating system support.

Moving to a usage-based approach means those groups need to agree on new criteria for when devices should be replaced and when they can safely remain in service.

The starting point is device data, and getting it doesn't have to be complex. Platforms that continuously collect CPU activity, memory demand, and application usage across an entire fleet provide IT teams with an accurate view of pressure points.
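Commercial monitoring platforms do this at fleet scale, but the underlying idea is simple enough to sketch. The snippet below is a minimal illustration using Python's open-source psutil library; the field names and the choice of metrics are assumptions made for the example, not any vendor's schema.

```python
# Minimal endpoint telemetry sample: the kind of signals a fleet
# monitoring agent collects. Field names and metrics are illustrative
# assumptions, not a vendor schema. Requires the psutil library.
import json
import platform
import time

import psutil


def sample_device(interval_s: float = 1.0) -> dict:
    """Take one snapshot of CPU, memory, and top memory consumers."""
    cpu_pct = psutil.cpu_percent(interval=interval_s)  # averaged over the interval
    mem = psutil.virtual_memory()

    # Top 5 processes by memory share, as a rough view of application demand.
    procs = sorted(
        psutil.process_iter(attrs=["name", "memory_percent"]),
        key=lambda p: p.info["memory_percent"] or 0.0,
        reverse=True,
    )[:5]

    return {
        "host": platform.node(),
        "ts": time.time(),
        "cpu_percent": cpu_pct,
        "mem_percent": mem.percent,
        "top_processes": [
            {"name": p.info["name"],
             "mem_percent": round(p.info["memory_percent"] or 0.0, 1)}
            for p in procs
        ],
    }


if __name__ == "__main__":
    # A real agent would ship this to a central store; here we just print it.
    print(json.dumps(sample_device(), indent=2))
```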

Take one New York bank we worked with. It had been planning to replace roughly 7,000 laptops each year as part of its normal refresh cycle. After analyzing workload patterns and device stress, the number dropped to around 600 machines that genuinely needed replacing.

2. Right-size devices according to real workload requirements

In one enterprise device estate analysis involving 5,000 laptops originally scheduled for refresh, usage data showed around 1,400 users could move to lower-cost machines without affecting their work. Adjusting the hardware mix revealed close to $1 million in potential savings without replacing the entire fleet.

Examples like this tend to surface quickly once organizations begin examining real workload patterns. Device fleets rarely stay balanced for long. Over time, companies accumulate a mix of machines that are either far more powerful than necessary or struggling to keep up.

It is not unusual to find someone answering emails on a high-spec laptop while another employee tries to run demanding software on a much weaker machine. Once IT teams look at actual workload patterns, those imbalances become obvious.
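As a rough illustration of what that triage can look like, here is a hedged sketch in Python. The utilization thresholds, tier labels, and input records are all assumptions invented for the example; a real right-sizing exercise would calibrate them against the roles and applications in the fleet.

```python
# Illustrative right-sizing pass over fleet telemetry. Each record holds
# a device's 90th-percentile CPU and memory use observed over a period of
# normal work. Thresholds and labels are assumptions for the sketch.

def recommend_tier(p90_cpu: float, p90_mem: float) -> str:
    """Map observed demand to a hardware tier recommendation."""
    if p90_cpu < 35 and p90_mem < 50:
        return "downgrade-candidate"   # a lighter, cheaper machine would do
    if p90_cpu > 85 or p90_mem > 90:
        return "upgrade-needed"        # device is routinely saturated
    return "keep-current-tier"


fleet = [
    {"device": "NYC-LT-0412", "p90_cpu": 22.0, "p90_mem": 41.0},
    {"device": "NYC-LT-0977", "p90_cpu": 91.0, "p90_mem": 94.0},
    {"device": "LDN-LT-1103", "p90_cpu": 55.0, "p90_mem": 70.0},
]

for row in fleet:
    print(row["device"], "->", recommend_tier(row["p90_cpu"], row["p90_mem"]))
```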

Workload analysis can also reveal that performance problems are sometimes caused by inefficient applications or background processes rather than hardware limitations. The same visibility often highlights unused or rarely used software that organizations can remove to cut unnecessary license costs.

3. Extend device lifecycles safely using performance data

When teams start digging into performance data, another pattern often appears. Many laptops continue handling everyday work long after the traditional refresh deadline has passed. The challenge is identifying which machines still have room to spare and which ones are starting to struggle.

Ongoing end-user monitoring helps IT teams make that call with much more confidence. Devices that continue to run comfortably can remain in service, while the smaller group showing signs of strain can be prioritized for replacement.
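One simple way to make that call is to look at trends rather than snapshots: a device hovering at a steady 60% memory use can stay in service, while one climbing week over week is heading for trouble. The sketch below fits a least-squares slope to weekly averages; the sample data and the one-point-per-week threshold are illustrative assumptions, not a recommended standard.

```python
# Sketch: flag devices whose memory pressure is trending upward.
# Weekly averages and the slope threshold are illustrative assumptions.
from statistics import mean


def weekly_trend(samples: list[float]) -> float:
    """Least-squares slope of weekly averages (percentage points per week)."""
    n = len(samples)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(samples)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den


# Twelve weeks of average memory use for two hypothetical devices.
stable = [62, 61, 63, 60, 62, 61, 63, 62, 61, 62, 63, 62]
straining = [68, 70, 71, 74, 75, 78, 80, 82, 85, 86, 89, 91]

for name, history in [("stable", stable), ("straining", straining)]:
    slope = weekly_trend(history)
    verdict = "prioritize for refresh" if slope > 1.0 else "keep in service"
    print(f"{name}: {slope:+.2f} pts/week -> {verdict}")
```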

In many organizations, the same data is also used to understand the wider digital employee experience, highlighting performance issues that may not be obvious from hardware specifications alone.

That visibility also helps IT teams spot small problems before they turn into support tickets, reducing interruptions for employees and limiting the need for reactive troubleshooting.

A Forrester study examining a financial organization with 40,000 devices found annual replacement rates dropping from 25% to 23% by extending the life of roughly 40% of the fleet from four to five years. Over three years that translated into around $2 million in avoided hardware costs.
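Those figures hang together arithmetically: a uniform four-year cycle replaces 25% of the fleet each year, and moving 40% of devices to a five-year cycle blends that down to 0.6 × 25% + 0.4 × 20% = 23%. A quick check:

```python
# Checking the blended replacement rate implied by the study's figures.
fleet = 40_000
four_year_share, five_year_share = 0.60, 0.40    # 40% of devices extended

baseline_rate = 1 / 4                             # everyone on a 4-year cycle
blended_rate = four_year_share * (1 / 4) + five_year_share * (1 / 5)

devices_avoided_per_year = fleet * (baseline_rate - blended_rate)
print(f"baseline {baseline_rate:.0%}, blended {blended_rate:.0%}")      # 25%, 23%
print(f"~{devices_avoided_per_year:,.0f} fewer replacements per year")  # ~800
```

At roughly 800 avoided replacements a year, the reported $2 million over three years works out to around $800 per device, a plausible figure for business laptops.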

4. Reduce dependency on high-spec endpoint hardware

Another way organizations are managing hardware pressure is by reconsidering where computing workloads actually take place.

Virtual desktop infrastructure and desktop-as-a-service platforms allow applications to run on centralized infrastructure rather than on the local device. In practical terms, that means organizations do not need to buy powerful laptops for everyone.

Of course, this model does not suit every role. Engineers, designers, and developers often require powerful local machines to run specialized software. However, many office-based employees rely mainly on web applications and productivity tools that run perfectly well in virtual environments.

In one investment management organization, device performance data was analyzed before a virtual desktop rollout to understand which workloads genuinely required local processing power.

The analysis helped identify employees who could move to virtual desktops without affecting productivity, allowing the organization to extend the lifespan of many endpoint devices.

5. Use device intelligence to improve forecasting and procurement stability

Usage data also changes how procurement planning works. Instead of replacing large numbers of devices at the same time, organizations can spread upgrades more gradually and align them with real demand.

That flexibility matters when prices are moving around as much as they are today, because it reduces the risk of committing to large hardware purchases at the worst possible moment.
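In practice, that can be as simple as ranking devices by observed strain and releasing purchase orders in waves rather than all at once. The sketch below, with invented device IDs, strain scores, and wave size, shows the shape of that approach.

```python
# Sketch: spread refresh purchases into quarterly waves by urgency
# instead of one bulk order. IDs, scores, and wave size are invented.

def plan_waves(devices: list[dict], per_quarter: int) -> list[list[str]]:
    """Most-strained devices go first; everything else waits for later quarters."""
    ranked = sorted(devices, key=lambda d: d["strain_score"], reverse=True)
    ids = [d["device"] for d in ranked]
    return [ids[i : i + per_quarter] for i in range(0, len(ids), per_quarter)]


fleet = [
    {"device": "LT-001", "strain_score": 0.91},
    {"device": "LT-002", "strain_score": 0.15},
    {"device": "LT-003", "strain_score": 0.74},
    {"device": "LT-004", "strain_score": 0.33},
]

for quarter, wave in enumerate(plan_waves(fleet, per_quarter=2), start=1):
    print(f"Q{quarter}: {wave}")
```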

The same approach is useful when introducing newer technologies such as AI PCs. Not every employee needs the additional processing power or memory. Developers or data specialists may benefit from it, while employees who primarily work with email, collaboration tools, and documents are unlikely to notice much difference.

Many organizations are therefore starting with small pilot groups to understand where those capabilities actually deliver value before committing to a broader rollout.

The nickname "RAMageddon" started as a bit of gallows humor, but it captures the mood in many IT departments right now. With memory prices moving this quickly, refresh planning has become far less predictable and demands a much more hands-on approach, grounded in how devices actually perform in everyday use.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro


Dan Salinas is Chief Operating Officer at Lakeside Software
