How can we create a sustainable AI future?
One of the major challenges for AI is its energy consumption

With innovation comes impact. The social media revolution changed how we share content, how we buy, sell and learn, but also raised questions around technology misuse, censorship and protection. Every time we take a step forward, we also need to tackle challenges, and AI is no different.
One of the major challenges for AI is its energy consumption. Together, datacenters and AI currently use between 1% and 2% of the world’s electricity, but this figure is rising fast.
To complicate matters, these estimates change as our AI technologies and usage patterns evolve. In 2022, datacenters, including AI and cryptocurrency platforms, used around 460 TWh of power. In early 2024, it was projected they could use up to an additional 900 TWh by 2030. In early 2025, this figure was radically revised downwards to approximately 500 TWh, largely because of more efficient AI models and datacenter technologies. To put this in context, demand from the electric vehicle industry will likely reach 854 TWh by 2030, with domestic and industrial heating sitting at around 486 TWh.
However, this growth is still significant, and everyone – providers and users alike – has a duty to make sure their use of AI tools is as efficient as possible.
How is AI infrastructure getting more power-efficient?
Whether it’s Moore’s law telling us we’ll see more transistors on the same area of silicon, or Koomey’s law telling us we’ll see more computations per joule of energy, computing has always become more efficient over time, and GPUs, the “engines” of AI, will certainly follow that trend.
Looking back at the period between 2010 and 2018, the amount of datacenter compute performed increased by around 550%, yet energy use increased by only 6%. We are already seeing this kind of improvement in AI workloads, and we have many reasons to be optimistic about the future.
We are also seeing a rise in the adoption of liquid cooling technologies. According to Markets and Markets, the market for liquid cooling in datacenters will grow almost tenfold in the next seven years. Water has a thermal conductivity far greater than air, making liquid cooling techniques more power-efficient (and therefore cheaper) than air cooling. This is ideal for AI workloads, which tend to consume more power and run hotter than non-AI workloads. Water cooling dramatically improves the power usage effectiveness (PUE) of datacenters – the ratio of total facility power to the power drawn by the IT equipment itself – bringing it closer to the ideal value of 1.0.
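To make that concrete, here is a minimal sketch of how PUE is calculated. The overhead figures for the air-cooled and liquid-cooled facilities below are illustrative assumptions, not measured values:

```python
# Power usage effectiveness (PUE) = total facility energy / IT equipment energy.
# A PUE of 1.0 would mean every watt goes to compute; real facilities are higher.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return the PUE ratio for a given period."""
    return total_facility_kwh / it_equipment_kwh

# Illustrative (assumed) annual figures for a facility with 10 GWh of IT load.
it_load_kwh = 10_000_000
air_cooled_total_kwh = 15_000_000     # assumption: fans and chillers add ~50% overhead
liquid_cooled_total_kwh = 11_000_000  # assumption: liquid cooling adds ~10% overhead

print(f"Air-cooled PUE:    {pue(air_cooled_total_kwh, it_load_kwh):.2f}")    # 1.50
print(f"Liquid-cooled PUE: {pue(liquid_cooled_total_kwh, it_load_kwh):.2f}") # 1.10
```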
Furthermore, we also see significant innovation in the liquid cooling field itself. Historically, liquid-cooled datacenters have used direct liquid-to-chip cooling (DLTC), where cold plates sit directly on CPUs or GPUs. As power (and consequently heat) loads rise, we are seeing more immersion cooling, where the entire server is immersed in a non-conductive liquid and all components can be cooled simultaneously.
This format can even be combined with DLTC cooling, ensuring that server components which usually ‘run hot’ (like the CPU and GPU) receive greater cooling power, while the rest of the server is cooled by the surrounding fluid.
How can we make AI more resource-efficient?
Alongside power, we should also consider water as a resource in its own right. Consider a standard internet search: an AI-powered search uses around 25 ml of water, whereas a non-AI-powered search uses 50 times less, around half a milliliter. On an industrial scale, a recent test case run by the National Renewable Energy Laboratory found that smart water cooling reduced water consumption by around 56%; in their case, over a million liters of water a year.
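To get a feel for what those per-search figures mean at scale, here is a back-of-the-envelope sketch; the daily query volume is an assumption chosen purely for illustration:

```python
# Back-of-the-envelope water use per search, scaled to an assumed daily query volume.
ML_PER_AI_SEARCH = 25.0     # milliliters per AI-powered search (figure cited above)
ML_PER_PLAIN_SEARCH = 0.5   # 50 times less for a conventional search

searches_per_day = 100_000_000  # assumed volume, for illustration only

ai_liters = ML_PER_AI_SEARCH * searches_per_day / 1000
plain_liters = ML_PER_PLAIN_SEARCH * searches_per_day / 1000

print(f"AI-powered searches:   {ai_liters:,.0f} liters/day")    # 2,500,000
print(f"Conventional searches: {plain_liters:,.0f} liters/day") #    50,000
```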
It’s also important to think about the minerals that our infrastructure uses, because these don’t exist in isolation. Re-using components where possible, or recycling them when it’s not, can be an enormously efficient way to both avoid unnecessary purchases and reduce the environmental impact of AI.
As an example, consider lithium, a key component in electric cars. Lithium can require up to half a million liters of water and generate fifteen tonnes of CO2 for one tonne of metal. At the same time, there’s a geopolitical element to our resource usage: around a third of our nickel, which is used in heatsinks, used to come from Russia.
In many cases, it’s even possible to recover certain metals. For example, using pyrolysis, you can obtain “black” copper from complex components; then, via electrolysis, you can separate the elements to recover pure copper, nickel, iron, palladium, titanium, silver and gold, turning e-waste into valuable assets. Although this will not be a considerable revenue stream, it’s a strong example of sustainability being a revenue generator rather than a cost center!
How can users make their AI processes more power-efficient?
It’s not enough for users to rely on datacenter operators and equipment manufacturers to reduce energy consumption and carbon footprints. All organizations need to be mindful of energy consumption and ensure their business is sustainable by design wherever possible.
To give a hands-on example, AI model training is rarely sensitive to latency, because it’s not usually a user-facing process. This means it can be done anywhere and, as a result, should be done in locations with greater access to renewable energy. A company that does its model training in Canada rather than in Poland, for example, will have a carbon footprint approximately 85% lower.
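A minimal sketch of where that difference comes from: operational training emissions are roughly the energy consumed multiplied by the carbon intensity of the local grid. The training energy and grid intensities below are illustrative assumptions, not measured values:

```python
# Operational training emissions ≈ energy consumed (kWh) × grid carbon intensity (gCO2/kWh).

def training_emissions_tonnes(energy_kwh: float, grid_gco2_per_kwh: float) -> float:
    """Return emissions in tonnes of CO2 for a training run."""
    return energy_kwh * grid_gco2_per_kwh / 1_000_000  # grams -> tonnes

training_energy_kwh = 500_000  # assumed energy for a single large training run

# Illustrative grid carbon intensities (gCO2/kWh); real values vary by year and source.
grid_intensity = {"Canada (hydro-heavy grid)": 130, "Poland (coal-heavy grid)": 750}

for region, intensity in grid_intensity.items():
    tonnes = training_emissions_tonnes(training_energy_kwh, intensity)
    print(f"{region}: {tonnes:.0f} tCO2")
# With these assumed figures, the lower-carbon grid cuts operational emissions by over 80%.
```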
At the same time, it’s important to be pragmatic about AI infrastructure. According to Intel PCF and OVHcloud LCA data, an NVIDIA H100 has a cradle-to-gate (manufacturing) carbon footprint approximately three times higher than that of an NVIDIA L4, reinforcing how important it is for organizations to understand which GPUs they need for the job.
In many cases, the latest GPU will be important – in particular, when organizations are trying to bring applications to market quickly – but in others, a lower-spec, more sustainable GPU will do the same job in the same time.
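As a rough sketch of that trade-off, the comparison below amortizes a card’s embodied (cradle-to-gate) emissions over its service life and adds the operational emissions of a given workload. All figures are illustrative assumptions for two hypothetical GPU tiers, not vendor-published data:

```python
# Total footprint of a workload = amortized embodied emissions + operational emissions.
# All numbers are illustrative assumptions, not vendor-published figures.

def workload_footprint_kg(embodied_kg: float, lifetime_hours: float,
                          workload_hours: float, power_kw: float,
                          grid_gco2_per_kwh: float) -> float:
    """Embodied emissions amortized over the card's life, plus operational emissions."""
    amortized = embodied_kg * (workload_hours / lifetime_hours)
    operational = workload_hours * power_kw * grid_gco2_per_kwh / 1000  # grams -> kg
    return amortized + operational

LIFETIME_HOURS = 5 * 365 * 24  # assumed five-year service life
GRID_GCO2_PER_KWH = 300        # assumed grid carbon intensity

# Hypothetical high-end card: higher embodied carbon and power draw, faster job.
high_end = workload_footprint_kg(1500, LIFETIME_HOURS, 100, 0.70, GRID_GCO2_PER_KWH)
# Hypothetical lower-spec card: lower embodied carbon and power draw, slower job.
low_spec = workload_footprint_kg(500, LIFETIME_HOURS, 150, 0.07, GRID_GCO2_PER_KWH)

print(f"High-end GPU:   {high_end:.1f} kg CO2 for the job")
print(f"Lower-spec GPU: {low_spec:.1f} kg CO2 for the job")
```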
AI sustainability: an exercise in attention to detail
Overall, there’s no doubt that our power and resource consumption is going to increase in the future; that’s the price of progress. What we can do is set a precedent by making every single part of our AI supply chains and processes as efficient as possible from the get-go, so that future developments also incorporate this into their standard operating procedures.
If we can make fractional gains wherever possible, they’ll add up and make sure that today’s needs don’t compromise the world of tomorrow.