Forget about AI GPU scarcity: data center operators may have to wait a whopping three years to get key component — and this may be killing off competition by driving smaller operators out

(Image credit: Amazon)

The GenAI boom has already sparked a surge in GPU demand, increased costs, and reduced availability. However, another pressing issue is looming: data centers are running out of space and power. This is particularly problematic for small companies providing high-performance computing (HPC) colocation services, which are finding current data centers maxed out. 

A recent report from JLL, a real estate investment and management firm, highlights that AI-driven growth is expected to continue, with data generation predicted to double over the next five years. 

Furthermore, data center storage capacity is projected to grow from 10.1 zettabytes today to 21.0 zettabytes in 2027, necessitating more data centers. The power demands of generative AI, estimated at 300 to 500+ megawatts per campus, will also require more energy-efficient designs and locations.



Power grids are reaching capacity

According to the report, the design of AI-specialized data centers differs significantly from conventional facilities, requiring operators to plan, design, and allocate power resources based on the type of data processed or stage of GenAI development. With the huge increase in GPUs, existing standards for heat removal will be surpassed, prompting a shift from traditional air-based cooling methods to liquid cooling and rear-door heat exchangers. 

Speaking to HPCwire, Andy Cvengros, managing director of U.S. Data Center Markets for JLL, emphasized the importance of planning. He explained that power grids are reaching capacity, and transformers have lead times exceeding three years, necessitating innovation. The GPU squeeze is affecting smaller colocation customers with 4-5 rack deployments, which are finding it increasingly difficult to secure data center space due to the demands of hyperscalers. 

Cvengros also highlighted that all major metro areas are essentially maxed out, making secondary markets like Reno, NV, or Columbus, OH, prime locations for new data center construction. However, demand is expected to keep climbing, and new data centers are 3.5 years out. 

The global GenAI energy demand presents both opportunities and challenges. Finding GPUs for HPC is only half the problem; as HPCwire points out, where to plug them in may become a bigger one. This issue is particularly acute for smaller operators, who may be driven out of the market by the competition for resources.


Wayne Williams
Editor

Wayne Williams is a freelancer writing news for TechRadar Pro. He has been writing about computers, technology, and the web for 30 years. In that time he wrote for most of the UK’s PC magazines, and launched, edited and published a number of them too.