If you’ve been following our writings over the past year, you’ve probably noticed that here at Tilt, we’re fully aboard the AI train. If an LLM could be trained to bring us cups of tea, I’m pretty sure we’d have figured it out by now. Alas, we’re not there yet – so at least in this respect, we’ll just have to rely on our good ol’ teasmade.
In almost every other respect, though, AI has become integral to what we do. From streamlining workflows to driving innovation and supercharging creativity, AI is part of the team. Check out how we’ve used it in some of our recent projects.
But we’re not the kind of team to charge ahead without asking tough questions. Naturally, as champions of sustainability (read more about our commitment to B Corp), we couldn’t ignore a vital question: what’s the environmental impact of our AI usage?
That’s where the Tilt AI Focus Group comes in – a spirited band of eco-conscious Tilters with a mission to uncover the carbon footprint of our AI efforts and explore ways we can minimise it. Armed with our curiosity, a passion for sustainability, and a commitment to doing better, we’re tackling the challenge head-on to ensure Tilt’s AI-driven future is as green as it is groundbreaking.
How exactly will we achieve this? Good question – and one we’re still working out.
Our scope is deliberately narrow – at least to begin with. We are focused solely on the carbon footprint of our direct AI usage, so we won’t be considering the environmental impact of training the models behind the products we use – in much the same way we wouldn’t factor in the carbon footprint of our parents or our past education when estimating the energy usage of our daily operations at Tilt. You get the point.
Furthermore, we’re deliberately excluding AI tools embedded in products whose core purpose is not AI-related – for example, Photoshop’s generative fill tool. Separating AI’s energy use from the core functionality of the software is simply unfeasible.
So, with our scope defined, we need to figure out exactly how to do this thing.
To help enlighten us and offer some much-needed advice as we embark on our journey, we invited Professor Thomas Nowotny, Head of the AI Research Group at the University of Sussex, to chat with us.
Thomas and his colleagues are actively researching bio-inspired AI – that is, applying principles and insights from biology to artificial intelligence – in an effort to make large language models more efficient and less energy-intensive.
He kicked things off with an eye-watering statistic from The Register: training GPT-3, the engine behind the original version of ChatGPT (we’re now on GPT-4o), consumed as much energy as driving a car to the moon and back. It’s a claim based, I suspect, largely on assumptions and estimates, but it’s attention-grabbing all the same. Either way, the point is clear: a huge amount of energy was required – and presumably, even more has been used to train its successors.
And that’s just the start! Once launched, these models continue to consume energy through usage. As the user base for services like ChatGPT, Gemini, and countless other AI products grows, so does their power consumption. Thomas estimates that transformers – the architecture supporting large language models (LLMs) – increase in size, and consequently energy use, by about 750% every 2 years. He calculates that if this trend continues, within the next 5–10 years we could be using all the energy produced on Earth solely on AI. Clearly, this trajectory is not sustainable.
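As a rough sanity check on that trend, here’s a minimal back-of-envelope sketch. It assumes “increase by 750%” means each 2-year period multiplies energy use by 8.5× (growing to 850% of the previous figure) – if it instead means ×7.5, the numbers shift but the conclusion doesn’t:

```python
# Back-of-envelope projection of the growth trend described above.
# Assumption: a "750% increase every 2 years" means multiplying by
# 8.5x per 2-year period (i.e. growing to 850% of the previous size).

GROWTH_PER_PERIOD = 8.5   # 750% increase => x8.5 each period
PERIOD_YEARS = 2

def growth_factor(years: float) -> float:
    """Cumulative growth multiple after `years` of the trend."""
    return GROWTH_PER_PERIOD ** (years / PERIOD_YEARS)

for years in (2, 4, 6, 8, 10):
    print(f"after {years:2d} years: ~{growth_factor(years):,.0f}x today's energy use")
```

After a decade of compounding at that rate, usage is tens of thousands of times today’s figure – which is why even a modest starting point soon collides with the world’s total energy supply.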
In contrast, the human brain uses just 20 watts to perform many of the same tasks – and in many cases, humans still outperform AI. It seems logical, then, that if we can model algorithms on the brain, we might make AI significantly more efficient. However, without delving into all the detail, it turns out this is actually quite difficult to achieve. Who knew? The potential payoff, though, if we can achieve it, would be nothing short of phenomenal.
A great deal of research is still needed in this field. While it’s fascinating, it doesn’t directly address the task at hand. So, we asked Thomas what advice he could offer us.
While he did offer some advice, he also confirmed what we’ve discovered ourselves: finding publicly available and transparent data from companies like OpenAI is likely not possible. He suggests instead that open-source models may be a more fruitful source of information.
One suggestion Thomas put forward was to use smaller models. For instance, the Gemini models are smaller and potentially more efficient than the GPT models. In fact – though not for this reason alone – we have already decided as a team to switch from using ChatGPT to Gemini.
It’s clear that our first step is knowing exactly what we’re measuring. To tackle this, our brilliant Creative, Xiyan, devised a survey for just this job. Her survey hopes to achieve two key objectives: identify our most-used AI tools and determine how frequently each Tilt team member uses them.
We already have a good sense of the tools we use most frequently, and the survey results will either confirm or challenge our assumption. The bigger question is how often we’re actually using them – a key metric for calculating the energy consumption of our AI usage.
We’ve also allocated time for one-on-one interviews with selected team members to dive deeper into their AI usage. While this may be the easier part of the equation, I suspect even our own usage stats will ultimately rely on estimates. Usage on some platforms, like Midjourney, can easily be extrapolated, while others – such as ChatGPT, which I suspect is our most used tool – are a little trickier to estimate.
Once we have an understanding of our usage, we can tackle the tricky task of correlating the images and videos we’ve generated, along with the queries we’ve processed, with the energy consumed – and, ultimately, the carbon emitted.
Although much of the data is not publicly available, there is still enough available information to make an informed estimate. As Thomas points out, it’s not hard to estimate the typical number of GPUs in a data centre – usually between 100 and 300. Since Nvidia hardware is documented, we can refer to their own resources to identify the power rating of each GPU model. From there, it’s a “simple” equation: power × time = energy used.
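The arithmetic above can be sketched in a few lines. All the figures below are illustrative assumptions, not measured values: the per-GPU power draw stands in for a figure taken from Nvidia’s published specs, the GPU count sits mid-range in the 100–300 estimate, and the grid carbon intensity is a placeholder average:

```python
# Rough sketch of the estimate described above. All constants are
# illustrative assumptions, not measured values.

GPU_POWER_W = 700               # assumed per-GPU power rating, in watts
NUM_GPUS = 200                  # assumed data-centre count (mid-range of 100-300)
HOURS = 1.0                     # assumed time spent serving our queries
GRID_INTENSITY_G_PER_KWH = 400  # assumed grid carbon intensity (gCO2e per kWh)

def energy_kwh(power_w: float, num_gpus: int, hours: float) -> float:
    """Power x time = energy, summed across GPUs, converted to kWh."""
    return power_w * num_gpus * hours / 1000

def carbon_kg(kwh: float, intensity_g_per_kwh: float) -> float:
    """Convert energy (kWh) into emissions (kg CO2e) via grid intensity."""
    return kwh * intensity_g_per_kwh / 1000

kwh = energy_kwh(GPU_POWER_W, NUM_GPUS, HOURS)
print(f"{kwh:.0f} kWh -> {carbon_kg(kwh, GRID_INTENSITY_G_PER_KWH):.1f} kg CO2e")
```

The real work, of course, is in attributing a fair share of that data-centre time to our own usage – the formula is the easy part.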
While the outcome of our number crunching will undoubtedly be an estimate, it at least provides a springboard from which to jump. From here we can analyse the results, identify actionable insights, and explore ways to reduce our impact. That could mean reducing redundancy by consolidating tools, optimising our usage, or, very likely, engaging in an offsetting programme. Our research, though imperfect, will give us greater insight into our environmental impact and allow us to put forward practical solutions for a greater good.
And who knows where we’ll go from there.
The issue of AI-related carbon emissions is far from unique to Tilt. Whether you’re an individual experimenting with AI tools to enhance your workflows, or an organisation like ours with integrated and evolving AI processes woven into our daily practices, we encourage you to consider the environmental impact of your choices.
However, we are just one tiny piece in a much larger puzzle. The conversation around sustainable AI must necessarily include not just governments but the tech giants behind these AI tools. By sharing our journey, however imperfectly we’ve travelled it, we hope to open up the conversation, inspire others to look at the impact of their AI usage, and push for putting sustainability at the heart of this growing industry.