A little over a year ago, our wonderful People & Culture Manager, Melanie, had an inspired idea. As well as doing good just for the sake of good, we could actively reinforce our commitment to our B Corp responsibilities by addressing the environmental impact of our AI usage. So, we formed a merry band of creatives, thinkers and, well, me — first to figure out how to assess our usage, then to work out the environmental impact, and finally to decide on how to address it. And so, the inaugural AI Focus Group was born, with the simple but also not-so-simple mission: uncover our AI carbon footprint and do something about it.
What we set out to do
As with all tasks where you have more enthusiasm than expertise, we started out broad and, I’ll be honest, a little vague. After several group discussions and one genuinely eye-opening chat with Prof. Thomas Nowotny from the University of Sussex, we began to sharpen our focus, eventually ending up with something resembling a working plan.
We agreed on a few explicit limitations to our scope. Firstly, it would be very difficult, if not downright impossible, to get exact numbers and, as such, this would be an exercise in estimation rather than precision. Secondly, at least for now, we would focus only on what lies within our control, in other words, our own use of AI tools and not energy used in the prior training of the models behind them. And with that, our scope was set and we were ready to crack on.
How we did it
Our investigation began with a company-wide survey to identify exactly what tools everyone was using. Since then, we — and by we I actually mean our Head of Creative Production and Innovation, Ivor — have undertaken the Herculean task of reading privacy policies, usage terms, and a whole lot more in an effort to establish our own working AI policy. The result? We now have AI guidelines and processes in place, making it considerably easier to track the tools everyone is using.
Armed with a list of our most used tools — or at least the Tilt-sanctioned ones — it became significantly easier to work out how to get actual numbers. Many of the tools we use offer usage stats, although in some cases they are a little limited. From here, I was able to start gathering real numbers, even if they were a little holey.
But a significant challenge still remained: how to turn a disparate collection of usage numbers from different platforms into a coherent calculation of energy consumed — and from there, into an estimation of our carbon footprint?
What we found
The uncomfortable truth is that there is generally a lack of transparency in the industry. Most platform providers don’t publicly disclose energy consumption data, and some won't even reveal the data centres they use, making it very difficult to pin down exact figures. But we weren’t about to throw in the towel just yet — using industry benchmarks we were able to work backward, drawing on publicly available research, including both the 2023 joint Hugging Face and Carnegie Mellon University study as well as the more recent 2025 Hugging Face video study, to establish a methodology using typical industry data.
Perhaps the most crucial decision was which conversion factor to use when translating energy usage into carbon output. Carbon output varies significantly between locations, depending on factors like the local mix of renewables and fossil fuels, and since most providers aren’t forthcoming with where their data centres are, we couldn’t be location specific. Instead we used a flat rate based on the U.S. average: roughly 0.81 pounds of CO2 per kWh (based on preliminary data for 2024/5 from the U.S. Energy Information Administration).
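For anyone wanting to replicate the conversion, it boils down to a few lines of Python. The figures are the ones quoted above: 0.81 lbs of CO2 per kWh, converted to kilograms and then metric tonnes.

```python
# Convert energy use (kWh) into metric tonnes of CO2 using the flat
# U.S.-average rate of 0.81 lbs CO2 per kWh described above.
LBS_CO2_PER_KWH = 0.81      # U.S. average (EIA, preliminary 2024/5 data)
KG_PER_LB = 0.45359237      # exact pounds-to-kilograms conversion

def kwh_to_tonnes_co2(kwh: float) -> float:
    """Estimated metric tonnes of CO2 for a given energy use in kWh."""
    kg_co2 = kwh * LBS_CO2_PER_KWH * KG_PER_LB
    return kg_co2 / 1000.0  # kilograms -> metric tonnes

print(round(kwh_to_tonnes_co2(1000), 2))  # 1,000 kWh -> 0.37 tonnes
```

Every estimate in the breakdown below is, at heart, this one conversion fed with a different energy figure.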
The breakdown
But first, a disclaimer. For all platforms, I have based my calculations on the most recently available model. Since this is ultimately an exercise in estimation, using a consistent benchmark across all tools feels like the most comparable approach — even if the specific model varied throughout the year.
Starting with LumaLabs: January 2026 is the first month for which I was able to gather complete data. So, extrapolating our January usage across the year, using the current Ray3 model, which costs 330 credits for a 5-second HD video, and the Hugging Face benchmark of 0.94 kWh (about the energy of running a microwave for an hour) per five-second video clip, our carbon emissions for 2025 come out at roughly 0.58 metric tonnes.
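The extrapolation itself looks like this. The per-clip energy is the Hugging Face benchmark quoted above; the monthly clip count in the example is a made-up placeholder, not our actual usage.

```python
# Sketch of the extrapolation used for video tools: take one complete
# month of usage, scale to a year, then convert energy to CO2.
KWH_PER_5S_CLIP = 0.94               # Hugging Face benchmark per 5-second clip
KG_CO2_PER_KWH = 0.81 * 0.45359237   # U.S.-average lbs/kWh, in kilograms

def annual_tonnes_from_month(clips_in_january: int) -> float:
    clips_per_year = clips_in_january * 12   # extrapolate January across the year
    kwh = clips_per_year * KWH_PER_5S_CLIP
    return kwh * KG_CO2_PER_KWH / 1000.0     # kg -> metric tonnes

# Hypothetical 100 clips in January -> 0.41 tonnes for the year
print(round(annual_tonnes_from_month(100), 2))
```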
For Runway, once again using January 2026 as the benchmark for our credit usage, the Gen-4 model, which costs 12 credits per second (60 credits for 5 seconds), gives us around 0.15 metric tonnes.
According to the 2023 Hugging Face study, image generation averages around 0.003 kWh per image (depending on the model). Therefore, our lifetime image generation (as of January 2026) for Midjourney gives us approximately 0.07 metric tonnes.
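The image calculation is even simpler: lifetime image count multiplied by the 2023 Hugging Face average of 0.003 kWh per image. The image count below is a placeholder, not our real lifetime total.

```python
# Midjourney-style estimate: images generated times average kWh per image,
# converted at the U.S.-average rate of 0.81 lbs CO2 per kWh.
def image_tonnes(images: int) -> float:
    kwh = images * 0.003                    # 2023 Hugging Face average per image
    return kwh * 0.81 * 0.45359237 / 1000   # kWh -> metric tonnes CO2

print(round(image_tonnes(50_000), 2))       # hypothetical 50,000 lifetime images
```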
Higgsfield is more difficult, as it aggregates multiple third-party AI models. To further complicate things, it has two usage tracks: a credit system, and unlimited use of certain models as part of the subscription. To be perfectly frank, the unlimited aspect is simply unquantifiable. While I could find some credit-specific data, it’s still difficult to get a clean number of credits per generation.
Using a combination of verified pricing data and third-party testing, the estimate comes in at 771.7 kWh for video and 3.42 kWh for images. That’s around 0.28 metric tonnes, though the limitations here are significant.
ElevenLabs presents a new challenge: no published data on energy consumption, and limited publicly available research on text-to-speech. Fortunately, ElevenLabs tells us exactly how much audio we generated: 35 hours, 17 minutes and 52 seconds. Given that text-to-speech is clearly computationally heavier than text generation, though not as heavy as image generation, we arrive at a range of between 0.001 and 0.01 kWh per thousand characters. Generating high-quality voice synthesis with emotional nuance and natural intonation is a genuinely more complex task, so a conservative estimate of 0.015 kWh per minute gives us 0.012 metric tonnes.
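Since the audio duration is one of the few exact numbers we have, this estimate can be reproduced directly. The 0.015 kWh per minute is, as noted, our own assumption rather than a published figure.

```python
# Reproducing the ElevenLabs estimate: 35h 17m 52s of generated audio
# at an assumed 0.015 kWh per minute, converted at the U.S.-average
# rate of 0.81 lbs CO2 per kWh.
minutes = 35 * 60 + 17 + 52 / 60         # total audio generated, in minutes
kwh = minutes * 0.015                    # assumed per-minute energy cost
tonnes = kwh * 0.81 * 0.45359237 / 1000  # lbs -> kg, kg -> metric tonnes
print(round(tonnes, 3))                  # 0.012
```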
Gemini is perhaps the most difficult tool to gather data for. Google only provides the number of conversations started, which is only slightly more helpful than no data at all. On top of that, it only allows access to usage figures for the last 30 days. So, there are a number of barriers here, in terms of both flimsy data and human inertia.
Google estimates that the median Gemini prompt uses just 0.00024 kWh of energy, but without prompt counts this isn’t directly usable. Based on my own behaviour, i.e. long, multi-prompt interactions, and deliberately overestimating, I have assumed 20 prompts per conversation and tripled the median energy. This results in approximately 0.07 metric tonnes, but let’s be very clear: this is not a precise estimate.
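The deliberately pessimistic Gemini methodology can be sketched as below. The conversation count in the example is a placeholder, not our actual figure.

```python
# Gemini overestimate: Google's per-prompt energy figure, scaled by an
# assumed 20 prompts per conversation and tripled as a safety margin.
KWH_PER_PROMPT = 0.00024
PROMPTS_PER_CONVERSATION = 20   # assumption based on long, multi-prompt use
OVERESTIMATE_FACTOR = 3         # deliberate margin for heavy prompts

def gemini_tonnes(conversations: int) -> float:
    kwh = (conversations * PROMPTS_PER_CONVERSATION
           * KWH_PER_PROMPT * OVERESTIMATE_FACTOR)
    return kwh * 0.81 * 0.45359237 / 1000   # kWh -> metric tonnes CO2

# Hypothetical 10,000 conversations -> 0.053 tonnes
print(round(gemini_tonnes(10_000), 3))
```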
And with that, we have a running total of 1.16 metric tonnes. But it is worth noting that there are gaps in the data and missing pieces: the Higgsfield unlimited usage track, not to mention the incomplete data samples that have been extrapolated across an entire year. To account for these, and any other unmeasured usage, we’ve applied a buffer, resulting in a total of 2 metric tonnes for Tilt’s AI carbon footprint in 2025. Let’s call this an order-of-magnitude figure rather than a precise total.
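Pulling the per-tool figures together, the running total is just a sum of the estimates above:

```python
# Per-tool estimates from the breakdown, in metric tonnes of CO2.
estimates = {
    "LumaLabs": 0.58,
    "Runway": 0.15,
    "Midjourney": 0.07,
    "Higgsfield": 0.28,
    "ElevenLabs": 0.012,
    "Gemini": 0.07,
}
running_total = sum(estimates.values())
print(round(running_total, 2))  # 1.16

# Buffered up to 2 tonnes to cover unmeasured usage — an
# order-of-magnitude figure, not a precise one.
buffered_total = 2.0
```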
What we did
We didn’t start with the assumption that we would offset our carbon footprint. However, as a creative agency, high-quality output, experimentation and innovation are central to our work, and it was clear that attempting to reduce AI activity in any meaningful way would undermine exactly that. So, in truth, offsetting was always the most practical option.
We’re fully aware of the controversies and shortcomings of offsetting. That’s why, while I was busy gathering the data (and wrestling with calculations), Mel focused on researching credible options. We wanted a programme with real benefits, ideally local, so the surrounding community could benefit directly.
That led us to the South Downs National Park ReNature project. The South Downs National Park Authority is running a scheme to increase the percentage of land managed for nature from 25% to 33% by 2030. It is primarily aimed at developers needing to achieve mandatory Biodiversity Net Gain targets, but it also allows individuals and businesses to participate voluntarily.
As a Brighton-based company, the idea of giving something back to the local landscape felt like the right fit. So, we’ve chosen to offset that 2 metric tonnes through the purchase of ReNature Credits. That’s a 30-year commitment to managing land for nature, right on our doorstep.
What now?
While this might seem like the end, it is in fact just the beginning. Our figures are imperfect, our methodology has gaps, and we're under no illusion that this is a solved problem. But we now have a baseline, a process, and a commitment. Considerably more than we had at the start of 2025.
This year, we’ll do it again with better tracking, and in turn, better data, and the benefit of having done it before. Over time, our goal is to refine the methodology, close the gaps, and hopefully, land on more solid numbers. We’ll start to look at trends in our behaviour, giving insights that might spark new ideas for reducing and offsetting our footprint.
If you’re using AI, which seems likely as it’s become hard to avoid, we’d encourage you to think about your footprint. Take it from us: you don’t need to get it right, you just need to get started.