Microsoft, Google, and Meta have borrowed EV tech for the next big thing in data centers: 1MW water-cooled racks

+/-400VDC power delivery: AC-to-DC sidecar power rack (Image credit: Storage Review)

  • Liquid cooling isn't optional anymore; it's the only way to survive AI's thermal onslaught
  • The jump to 400VDC borrows heavily from electric vehicle supply chains and design logic
  • Google’s TPU supercomputers now run at gigawatt scale with 99.999% uptime

As demand for artificial intelligence workloads intensifies, the physical infrastructure of data centers is undergoing rapid and radical transformation.

The likes of Google, Microsoft, and Meta are now drawing on technologies initially developed for electric vehicles (EVs), particularly 400VDC systems, to address the dual challenges of high-density power delivery and thermal management.

The emerging vision is of data center racks capable of delivering up to 1 megawatt of power, paired with liquid cooling systems engineered to manage the resulting heat.

Borrowing EV technology for data center evolution

The shift to 400VDC power distribution marks a decisive break from legacy systems. Google previously championed the industry's move from 12VDC to 48VDC, but the current transition to +/-400VDC is being enabled by EV supply chains and propelled by necessity.
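For a sense of why the voltage jump matters, the minimal sketch below (illustrative assumptions only, not figures from Google or OCP) estimates the bus current needed to feed a 1MW rack at each DC level mentioned above; treating the +/-400VDC bus as an 800V span is my own simplification.

```python
# Illustrative back-of-the-envelope sketch: bus current required to deliver a
# given rack power at different DC voltages, assuming ideal conversion and
# ignoring real-world distribution details.

def bus_current_amps(rack_power_w: float, bus_voltage_v: float) -> float:
    """Current the distribution bus must carry for a given load (I = P / V)."""
    return rack_power_w / bus_voltage_v

RACK_POWER_W = 1_000_000  # the 1 MW rack target discussed in the article

for voltage in (12, 48, 400, 800):  # 800 V ~ full span of a +/-400 VDC bus (assumption)
    amps = bus_current_amps(RACK_POWER_W, voltage)
    print(f"{voltage:>4} V bus -> {amps:>9,.0f} A")
```

Because resistive loss in the busbar scales with the square of the current, each step up in voltage cuts the copper cross-section and conduction losses needed to move the same power.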

The Mt. Diablo initiative, supported by Meta, Microsoft, and the Open Compute Project (OCP), aims to standardize interfaces at this voltage level.

Google says this architecture is a pragmatic move that frees up valuable rack space for compute resources by decoupling power delivery from IT racks via AC-to-DC sidecar units. It also improves efficiency by approximately 3%.

Cooling, however, has become an equally pressing issue. With next-generation chips consuming upwards of 1,000 watts each, traditional air cooling is rapidly becoming obsolete.

Liquid cooling has emerged as the only scalable solution for managing heat in high-density compute environments.
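A quick heat-balance sketch illustrates the gap (the temperature rises and coolant properties below are textbook assumptions, not vendor figures): carrying away 1kW of chip heat with air takes far more volumetric flow than doing it with water.

```python
# Illustrative sketch: coolant flow needed to remove 1 kW of chip heat,
# comparing air with water, using assumed properties and temperature rises.

CHIP_HEAT_W = 1_000          # per-chip power cited in the article

AIR_CP = 1005.0              # J/(kg*K), assumed
AIR_DENSITY = 1.2            # kg/m^3, assumed
AIR_DELTA_T = 15.0           # K temperature rise across the chip, assumed

WATER_CP = 4186.0            # J/(kg*K), assumed
WATER_DENSITY = 1000.0       # kg/m^3, assumed
WATER_DELTA_T = 10.0         # K temperature rise across the cold plate, assumed

def mass_flow_kg_s(heat_w: float, cp: float, delta_t: float) -> float:
    """Mass flow from the steady-state heat balance Q = m_dot * cp * dT."""
    return heat_w / (cp * delta_t)

air_flow_m3_s = mass_flow_kg_s(CHIP_HEAT_W, AIR_CP, AIR_DELTA_T) / AIR_DENSITY
water_flow_l_min = mass_flow_kg_s(CHIP_HEAT_W, WATER_CP, WATER_DELTA_T) / WATER_DENSITY * 60_000

print(f"Air:   ~{air_flow_m3_s * 2118.88:,.0f} CFM per 1 kW chip")  # 1 m^3/s = 2118.88 CFM
print(f"Water: ~{water_flow_l_min:,.1f} L/min per 1 kW chip")
```

Under these assumptions, roughly 117 CFM of air does the work of about 1.4 liters of water per minute, which is why cold plates win once racks pack hundreds of kilowatt-class chips together.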

Google has embraced this approach with full-scale deployments; its liquid-cooled TPU pods now operate at gigawatt scale and have delivered 99.999% uptime over the past seven years.
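For context, 99.999% availability ("five nines") allows only a few minutes of downtime per year, as the quick arithmetic below shows (the calculation is mine, not Google's).

```python
# Illustrative arithmetic: downtime budget implied by a given availability level.

HOURS_PER_YEAR = 24 * 365.25

for label, availability in (("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)):
    downtime_minutes = HOURS_PER_YEAR * (1 - availability) * 60
    print(f"{label:>11} ({availability:.5f}) -> ~{downtime_minutes:,.1f} min/year of downtime")
```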

These systems have replaced large heatsinks with compact cold plates, effectively halving the physical footprint of server hardware and quadrupling compute density compared to previous generations.

Yet, despite these technical achievements, skepticism is warranted. The push toward 1MW racks is based on the assumption of continuously rising demand, a trend that may not materialize as expected.

While Google's roadmap highlights AI's growing power needs - projecting more than 500 kW per rack by 2030 - it remains uncertain whether these projections will hold across the broader market.

It’s also worth noting that the integration of EV-related technologies into data centers brings not only efficiency gains but also new complexities, particularly concerning safety and serviceability at high voltages.

Nonetheless, the collaboration between hyperscalers and the open hardware community signals a shared recognition that existing paradigms are no longer sufficient.

Via StorageReview

