Why don’t AI companies talk about energy usage?
Why AI’s environmental impact matters – and why the companies building it stay quiet

If you've used ChatGPT recently (and statistically, you probably have), you’re part of a global trend. OpenAI’s chatbot is estimated to be the fifth most visited website in the world, with more than 400 million users a week.
And that’s just one AI tool. As generative AI becomes embedded into apps, search engines, workplaces and daily habits, our interactions with large language models (LLMs) like ChatGPT, Google Gemini and Claude are only increasing.
We’ve become more aware of AI’s risks, from misinformation and deepfakes to surveillance and emotional dependence. But one of the biggest risks is AI’s environmental impact.
Running LLMs requires enormous amounts of electricity and water. These models consume energy not just during training, when they absorb and organize vast volumes of data, but every time you ask a question. That’s billions of queries a day, each one demanding computational power and adding to a growing environmental cost.
Why we still don’t know how much energy AI really uses
The truth is, we don’t know how much energy AI really uses, and that’s a big problem.
Unlike most industries, AI companies aren’t required to report the environmental footprint of their models. There’s no standardized regulation or reporting framework in place for the energy use or carbon emissions tied specifically to AI systems.
There are a few reasons for that. First, the technology is still relatively new, so the infrastructure for this sort of regulation and reporting hasn’t caught up.
But tech companies also haven’t pushed for it. That’s partly because AI is a fiercely competitive space, which means that sharing energy data could inadvertently reveal details about a model’s size, architecture or efficiency.
It’s also technically difficult. AI systems are spread across vast server farms, multiple teams and shared infrastructure, which makes it hard to isolate and track usage.
Then there’s the optics. Companies heavily invested in the narrative that AI will only do us all vast amounts of good don’t want to be linked to sky-high emissions or the guzzling of finite resources.
So, with little transparency, researchers and journalists are left to estimate. And those estimates are alarming.
Here’s what we do know about AI’s energy use
Many credible estimates have been made over the past few years. But a recent report from MIT Technology Review offers one of the clearest pictures yet of AI’s growing appetite for electricity and water.
The report is filled with striking comparisons. For example, generating a five-second AI video might use as much energy as running a microwave for an hour.
Even simple chatbot replies can vary widely in energy consumption. One estimate puts a basic reply at anywhere between 114 and 6,700 joules, which is equivalent to running a microwave for between half a second and eight seconds. But as tasks become more complex – like those that involve images or video – the energy cost rises dramatically.
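The joules-to-microwave comparison is easy to sanity-check with the basic relationship time = energy ÷ power. This sketch assumes a typical microwave draws around 800 watts (that wattage is my assumption, not a figure from the report, and the exact equivalence shifts with whatever wattage you pick):

```python
# Rough sanity check of the joules-to-microwave comparison.
# Assumes a microwave draws about 800 W (an assumption, not a
# figure from the report). Since power (W) = energy (J) / time (s),
# the running time is time = energy / power.

MICROWAVE_WATTS = 800  # assumed typical microwave power draw

def microwave_seconds(joules: float, watts: float = MICROWAVE_WATTS) -> float:
    """Seconds a microwave of the given wattage must run to use `joules`."""
    return joules / watts

low = microwave_seconds(114)    # low-end chatbot reply
high = microwave_seconds(6700)  # high-end chatbot reply
print(f"{low:.2f} s to {high:.2f} s of microwave time")
```

At 800 W the 6,700-joule upper estimate works out to roughly eight seconds, matching the report's comparison; the low end lands nearer a sixth of a second, so the "half a second" figure implies a lower assumed wattage.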
According to the report, the bigger picture is even more concerning. In 2024, US data centers consumed around 200 terawatt-hours of electricity, which is roughly the same as Thailand’s entire annual consumption.
And that number is climbing fast. By 2028, researchers estimate that AI-related electricity use alone could reach up to 326 terawatt-hours per year. That’s more than all of the current data center usage in the US and enough to power more than 22% of American households annually.
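The household comparison can also be checked with a unit conversion. This sketch assumes an average US household uses about 10,500 kWh of electricity per year and that there are roughly 131 million US households (both figures are my assumptions for illustration, not numbers from the report):

```python
# Rough check of the "more than 22% of American households" figure.
# Assumptions (not from the report): ~10,500 kWh average annual
# household electricity use, ~131 million US households.

TWH_TO_KWH = 1e9                 # 1 terawatt-hour = 1 billion kilowatt-hours
KWH_PER_HOUSEHOLD_YEAR = 10_500  # assumed average annual household usage
US_HOUSEHOLDS = 131e6            # assumed number of US households

ai_use_kwh = 326 * TWH_TO_KWH    # projected AI electricity use, 2028
households_powered = ai_use_kwh / KWH_PER_HOUSEHOLD_YEAR
share = households_powered / US_HOUSEHOLDS
print(f"~{households_powered / 1e6:.0f}M households, ~{share:.0%} of the US")
```

Under these assumptions, 326 TWh would power roughly 31 million households, around a quarter of the US, which is in the same ballpark as the report's "more than 22%" figure.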
In carbon terms, that's the equivalent of driving more than 300 billion miles, which works out to about 1,600 round trips to the sun.
It's not just about power either. AI infrastructure also consumes vast amounts of water, primarily for cooling. In some regions, this adds strain to already stretched water supplies, which is a serious concern during heatwaves and droughts.
Experts say that one of the biggest challenges here is scale. Even if we had precise figures today, we’d still be underestimating the problem in a year or even in a month's time.
That’s because the way we use AI is evolving rapidly. Generative models are being built into everyday tools, from writing apps and customer service bots to photo-editing software and search engines. As this adoption accelerates, without a clear understanding of the costs, the environmental impact is likely to spiral much, much faster than we ever expected.
Here’s what needs to change and who’s stepping up
The good news is there is growing momentum to make AI more accountable for its environmental footprint. But right now, transparency is the exception and not the rule.
For example, the Green Software Foundation (GSF), a global non-profit organization whose members include Microsoft, Cisco, Siemens and Google, is one of the groups leading the charge.
Through its Green AI Committee, the GSF is developing sustainability standards that are designed specifically for AI. This includes lifecycle carbon accounting, open-source tools for tracking energy usage and real-time carbon intensity metrics, which are all aimed at making AI’s environmental impact measurable, reportable and (hopefully) manageable.
Policy frameworks are also taking shape in some regions. For example, the EU’s AI Act encourages sustainability through risk assessments, while the UK’s AI Opportunities Action Plan and the British Standards Institution (BSI) are creating technical guidance on how to measure and report AI’s carbon footprint. These are early steps, but they could help to inform future regulation.
Some AI companies are taking steps in the right direction too, investing in renewable energy, researching more efficient training methods and developing improved cooling infrastructure. But these improvements aren’t standard across the industry yet and there’s still no broadly accepted approach.
That’s why transparency matters. Without clear and open data about how much energy these systems consume, we can’t accurately assess the cost of AI or hold the right companies accountable. We certainly can’t build more sustainable policy or infrastructure around it either. Tech companies can’t keep asking us to trust in the future of AI while hiding the true cost of running it.
Becca is a contributor to TechRadar, a freelance journalist and author. She’s been writing about consumer tech and popular science for more than ten years, covering all kinds of topics, including why robots have eyes and whether we’ll experience the overview effect one day. She’s particularly interested in VR/AR, wearables, digital health, space tech and chatting to experts and academics about the future. She’s contributed to TechRadar, T3, Wired, New Scientist, The Guardian, Inverse and many more. Her first book, Screen Time, came out in January 2021 with Bonnier Books. She loves science-fiction, brutalist architecture, and spending too much time floating through space in virtual reality.