Anne Currie is the co-author of the acclaimed O'Reilly book "Building Green Software", a pillar of the Green Software Foundation (GSF), a veteran of the cloud industry, and also a science fiction novelist with her series of Panopticon books.
Preparing her forthcoming keynote at Green IO London, she went all the way down the rabbit hole of AI and energy efficiency. She investigated, from OpenAI to DeepSeek and open source models, what a software developer using these models can and cannot do to reduce energy consumption.
In the second part of this episode, Anne Currie and Gaël Duez discussed:
- the "East data, West computing" Chinese strategy
- picking the cheapest AI model today and... tomorrow
- learning from Python to forecast trends in open source
- 4 questions to ask when choosing an AI-powered product
- the case for Wright's law in open source
And much more!
❤️ Subscribe, follow, like... stay connected however you prefer so you never miss an episode, twice a month, on a Tuesday!
📧 To get carefully curated news on digital sustainability packed with exclusive Green IO content once a month, subscribe to the
Green IO newsletter here.
Learn more about our guest and connect
📧 You can also send us an email at
[email protected] to share your feedback and suggest future guests or topics.
Anne's sources and other references mentioned in this episode
Transcript (auto-generated)
Anne (00:01)
But the likelihood is you're doing most of your work through a hyperscaler. You're doing it through AWS. You're doing it through Google. You're doing it through Microsoft. And the answer there is the answer that all of us in the industry have been saying for nearly 10 years now, which is that you've got to demand it. You've got to go to your supplier, your AWS, and say, look, I want AI that runs on renewables. What is your story here? And how do I make sure that my AI is trained, and at inference runs, on renewables? Because you can't really make that change yourself. You can only get them to make it. But if you do say, I want it, and this is part of my decision making process, they will make changes.
Gaël Duez (00:43)
Hello everyone, welcome to Green IO! I'm Gael Duez and in this podcast, we empower responsible technologists to build a greener digital world, one bite at a time. Twice a month on a Tuesday, guests from across the globe share insights, tools and alternative approaches, enabling people within the tech sector and beyond to boost digital sustainability.
Do I really need to introduce Anne Currie to you? She's the co-author of the acclaimed O'Reilly book Building Green Software, a pillar of the Green Software Foundation, a veteran of the cloud industry, and also a science fiction novelist with her series of Panopticon books. And we will have the honor of having her as a keynote speaker at Green IO in London on September 24th, where she will talk about AI and its energy consumption. And there's something to know about Anne. She's a thorough speaker. When you give her a bone, she will go all the way down into the rabbit hole. This is exactly what happened on this topic of AI and efficiency.
This is the second part of this episode. In the first part, Anne provided quite a lot of context on the generative AI momentum and also why DeepSeek was a pivotal moment in AI. Not necessarily because of the energy efficiency of the model itself, but for the interest it raised on the topic. The second feature, which made DeepSeek's launch so pivotal, was its licensing policy. DeepSeek being an open source model, it led to many implications for potential future efficiency gains. But these implications don't come without issues.
Anne (02:31)
DeepSeek has not solved the problem. There are two things that we need to be thinking about. The first is that efficiency will mean that we just use a lot more of it, which doesn't necessarily mean there's a problem, but it doesn't mean that efficiency, in and of itself, on its own, is going to solve the problem. And that's the usual thing that always comes up, Jevons paradox. Now, I'm a big believer that Jevons paradox could be fixed, and Jevons paradox is the definition of economic growth in many ways. But I think there's a different problem here that will be solved in a different way, which is, to go back to what I said before, hyperscalers gonna hyperscale. The AI scaling law, where AI will just keep getting better as you pour more and more money into it, did not go away. Certain enterprises and vendors said, you just don't need to keep going up this curve. We're happy here. Just let us off and we'll take the DeepSeek route, and all of the models that have been spawned by DeepSeek. We'll take that route because it's good enough now, and we don't need to keep going up. But for the hyperscalers, the OpenAIs of the world, the Googles of the world, good simultaneous translation in headsets was never going to be enough. They wanted, and they've always wanted, or have wanted since the AI scaling laws became obvious to them, AGI, artificial general intelligence. They just want more and more clever models.
Gaël Duez (04:16)
Yeah, but... Sorry to interrupt you, Anne, but that's absolutely fascinating, the way you present things from more a marketing perspective than a technical perspective. Because the brute force approach was in the best interest of hyperscalers, because hyperscalers gonna hyperscale, as you say, the moment we've actually proved that a significant chunk of the market doesn't want more threatens the very essence of their business model. Could we allow ourselves to say that they elaborated a narrative to justify the endless growth in capacity?
Anne (04:57)
Yeah. So we're now in a space race, a kind of semi-militaristic race for who's going to have the biggest artificial brain. As a sci-fi writer, this is just astonishing. This is the idea that if we can turn electricity into thought, and we can turn enough electricity into some amazing thought, it will just become cleverer and cleverer, and the first person who gets that will have the biggest brain working for them. Although, will it be worth it? Anyway, whether or not this is the right thing to do, and I would suggest that it isn't, I don't think any of us is going to stop it happening. The hyperscalers, Google, OpenAI, xAI, are going to keep competing to produce this enormous artificial brain. But one thing has happened that I think we absolutely do need to talk about here. Sorry, I realize there's so much stuff here. Anyway, the other thing that's happened is that, yes, we've got all of this interest in just scaling up forever and producing the mega brain. But we've also now got multiple strategies in play. So the goal is the mega brain; there are multiple strategies out there for how that mega brain will be produced. Some of them are aligned with the energy transition. Some of them are alignable with renewables. And some of them at the moment are not, but even where they're not, there are knock-on benefits for the renewable transition. So I would say the three key strategies are, first, the American strategy: brute force. They've started to build gas-fired power stations, as well as started looking to nuclear and so on. The American strategy is a greenback strategy: they have all the money in the world, and they are going to spend it. They'll spend money to win. And that is not in any way aligned with the energy transition.
But I think there are two other interesting strategies going on, two geopolitically huge strategies. The next is the Chinese strategy. And China has been very clear, publicly clear, about what their nationwide strategy is. Part of it is open source, which is pretty amazing: get everybody's eyes on this, that's part of their strategy. But the other part of their strategy is something called East data, West computing. Have you heard of that at all?
Gaël Duez (07:53)
Never. Never.
Anne (07:54)
Oh, everyone should follow me on LinkedIn, because I do actually try and keep on top of what's going on here. I've blogged about it and talked about it a reasonable amount. The East data, West computing strategy for China is: well, we want AI, we want giant data centers, but we want them in a way that is actually realistic and not ridiculously expensive. We don't want to have to spend huge amounts of money on it. So what they say is, we've got a big country and most people live on the East Coast. All the big cities are in the East. The producers of the data are on the East Coast. And we don't want to build the data centers on the East Coast, competing with those cities for land and water and electricity and every other resource. So our plan is that the cities will grow in the East, and the data centers will all get built in the West, where we've got huge areas of land where there's not much going on. We can build giant wind farms, we can build solar, we can build nuclear power stations, we can build hydro. We'll do all of the processing of data over in these giant data centers in the West. All the people will be in the East. And what we will exchange between the two will be data. Not electricity, just data. All the huge amounts of data produced by the people go to the West, and then all the results of analyzing that data come back to the cities. And it's much easier and cheaper to build networking infrastructure than it is to build electricity infrastructure, or any other kind of infrastructure. So I think that's an excellent strategy, and not one that we talk about anywhere near as much as we should do. I keep trying to say build your data centers in Scotland; that is the UK equivalent. Obviously, China is on a slightly larger scale. And if they're attempting to do it across the whole of China, we should really be attempting to do it at least within Europe. So there, that's a strategy which is aligned with renewables.
Because the whole point is that you can build all of these things in the West, where there's plenty of space. The other strategy, which I also think is an excellent strategy, is the one coming out of India. So in India, they are building a strategy which is very aware of their context. And the context in India is there's a lot of sun, there's tons and tons of solar. So in India, they've already built at least one AI data center which is solar and battery powered. Why pay for more? You know, live with what we've got. So solar is their strategy.
But the other half of their strategy, which I really like, is the use of time of use tariffs on electricity. So India has really gone all in on time of use tariffs for electricity. They've said, look, if you're using electricity while the sun is shining, it is, and should be, to you 10 times cheaper than if you attempt to use it when the sun isn't shining. So they try to steer everybody, all their industry, all their consumers, all their users, to do the energy intensive tasks while the sun is shining, and to stop doing them, or do a lot less of them, really dial them down, when it isn't, because you'll be paying 10 times more. So again, that is a strategy that is very aligned with the renewable transition. It's saying the energy of the future is solar. We need everybody to be using it as it is generated, using fewer batteries, because it would be incredibly expensive and incredibly resource intensive to buffer everything in batteries so that you can use electricity at any time, whether the sun's shining or it's night. It's so much more efficient, so much better, and so much more resource efficient to just get used to using your electricity when the sun is shining. Which then gets back to your point before.
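[Editor's note] The time-of-use logic Anne describes can be sketched in a few lines. All rates, hours, and job sizes below are made-up illustrative assumptions, not actual Indian tariff figures; the only idea taken from the conversation is the "10 times cheaper while the sun shines" ratio.

```python
# Toy time-of-use tariff: the same workload costs far less when it is
# scheduled into solar hours. Rates and the solar window are assumptions.
SOLAR_HOURS = range(9, 17)        # assume solar pricing applies 09:00-17:00
PEAK_RATE = 0.30                  # currency units per kWh off-solar (assumed)
SOLAR_RATE = PEAK_RATE / 10       # "10 times cheaper" while the sun shines

def job_cost(start_hour: int, duration_h: int, kw: float) -> float:
    """Electricity cost of a job drawing `kw` kilowatts for whole hours."""
    total = 0.0
    for h in range(start_hour, start_hour + duration_h):
        rate = SOLAR_RATE if (h % 24) in SOLAR_HOURS else PEAK_RATE
        total += kw * rate
    return total

# An 8-hour, 100 kW training run: overnight vs. inside the solar window.
night = job_cost(start_hour=22, duration_h=8, kw=100)  # all off-solar hours
day = job_cost(start_hour=9, duration_h=8, kw=100)     # all solar hours
```

Under these assumed numbers the overnight run costs 10x the daytime run, which is exactly the behavioral steer the tariff is designed to produce.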
Gaël Duez (12:04)
I think we got a bit lost within the strategies of the green transition in AI, but it makes a lot of sense. And I've never heard about the East data, West computing approach, or the approach from India. And you know, this is something I'm always looking for: to have a global perspective on all green IT related topics, and not be too Western centric. So thank you so much for this. Now, going back to these strategies and this AI and energy efficiency approach. Now that we've seen these country level strategies, what should, according to you, be the enterprise level strategy? By that I mean, if you're a leader, and it doesn't really matter if you're a tech, product, or financial leader in your company, what should be your strategy to make sure that you use the good AI, well, that's a very wide topic, but also the most energy efficient AI?
Anne (13:05)
So it's an interesting one. What do you want to do with it? Have a think about what you're attempting to do. In the end, you need AI to be aligned with the energy transition, which means that you need to be doing your training where the sun's shining or the wind's blowing, and less when it isn't. But the likelihood is you're doing most of your work through a hyperscaler. You're doing it through AWS. You're doing it through Google. You're doing it through Microsoft…
Gaël Duez (13:34)
Or Alibaba,
Anne (13:59)
Or Alibaba, indeed. And the answer there is the answer that all of us in the industry have been saying for nearly 10 years now, which is that you've got to demand it. You've got to go to your supplier, your AWS, and say, look, I want AI that runs on renewables. What is your story here? And how do I make sure that my AI is trained, and at inference runs, on renewables? Because you can't really make that change yourself. You can only get them to make it. But if you do say, I want it, and this is part of my decision making process, they will make changes. I mean, everybody was slightly blown away by the AI scaling laws coming on board, but they still know that everybody does care about running on renewables, and they care about running on renewables because they're a lot cheaper. But you have to keep pressing your suppliers to do this work for you. Or choose a supplier based on their story here. Go and say, what's your story? What's your plan? I'm choosing you based on this. Making a silent better choice is not really of that much use here. You need to make a noisy choice. You need to be going to your suppliers and saying, "This is what I care about. What are you doing about it?"
Gaël Duez (15:03)
Okay. I got it from an operational perspective. However, what about the choice of models? I think in many corporations at the moment, you just chase the winner. Like, ChatGPT is everything, at least in the Western world, so we must have our ChatGPT use case. How can we help make better educated decisions? Because we've obviously discussed a lot about efficiency and reducing the energy intensity of both the building phase and the inference phase. But there is also the case that a good watt is a negative watt. I mean, a good watt is a watt that we don't consume. How is this discussion truly happening, and is it only happening on a cost basis, like it costs literally 10 times fewer dollars to train the model? And I'm realizing my question is all over the place now, but it goes back to: how mature are most global decision makers in corporations to make the right decisions when it comes to choosing AI models, and not necessarily following the hype, I would say?
Anne (16:16)
Well, it's always an interesting one, this. I think that there is an incredible alignment here because, as we talked about earlier, cost is a good proxy for energy use, for electricity use. There are almost no enterprises who are not thinking, well, just a minute, what's the cost here? If I want to use a model in some tool that I'm running myself, what's the cost going to be? And 10x does make a difference. 10x is something that will change behavior. So I suspect all enterprises just need to start thinking about, forget green, what's the cheapest way for you to get what you need? And remember that the open source stuff, where there's a very active community of people who keep trying to reduce the costs, is going to be the one that ends up being cheapest. The interesting thing is, ChatGPT did come out with an open source model this month, I think, wasn't it? Which was nice to see, because a lot of the work, you mentioned Hugging Face, they have been shouting about the fact that there is now an open source community who are really keen to work on these things and need models that are available under permissive licenses, not massively restrictive licenses. So although ChatGPT at the moment is very inefficient, those folk will be looking over at what DeepSeek has done, and the various other models since then that have taken what DeepSeek did six months ago and made it even better. And they'll be saying, well, how can I apply this to my favorite model, which is ChatGPT? So I think for enterprises, go for the cheapest; that's your North Star, go for the cheapest way of doing it. But you're going to have to think about what's cheaper now, but also what's going to be cheaper in a year's time, in two years' time, in three years' time.
I've had many conversations with people who've said things like, well, I chose this green technology. I think it was Julia versus Python. Julia was always a really super green technology, but it was quite niche. There weren't so many people working on it. Python has just got more and more efficient over time, just because it was a much bigger community, with many more people working on it, many more people willing to invest in it. So although ChatGPT's open source model is not so efficient at the moment, my suspicion is that they'll have a very active community of people improving it. So it's hard to say. It's early days, but I would be tempted to say, well, what we might want to do is learn from Python. Python got much, much better. So people who had chosen Julia, which was better at the time, didn't get those benefits. But people who stayed on earlier versions of Python also didn't get those benefits. Python has turned out to be very difficult to upgrade through to the new releases, which are much more efficient than the earlier versions were. So you need a model that is going in the right direction for you, where there's a very active community who care about what you care about, and you want them to care about it costing less money to run. And you need them to be focused on making it easy for folk to upgrade through to the new models as they arrive, because there's no point in having some new model that's amazing but you can't get there, because it's impossible for you to get from where you are on model version one to model version 10, which is a hundred times better.
Gaël Duez (20:23)
Well, that's super interesting, so let me wrap it up here. If I'm a decision maker, CTO, CPO, or whatever, et cetera, and I'm faced with choices regarding an AI-powered product, there are actually several questions that I should ask, if I'm following your thought here. The first one, obviously, is: what is the cost? The cost of today and the cost of tomorrow, because the problem, or the opportunity, with AI-powered products is that most of the time they scale. So how much does it cost me today? How much will it cost me tomorrow? That's question number one.
Anne (21:01)
But not just because you scale it; also because, is that model going to get better over time, or worse, or stay the same?
Gaël Duez (21:10)
And that's actually the second one: where does the engine come from? Does it come from an open source community? And how big is this open source community? Which is a good proxy for what you mentioned, which is how much better this model will become over time. But then you need also to ask a couple of technical questions: how easily can this model be scaled, and how easily can I upgrade from one model to another? So that's already three big questions, I would say, that anyone should ask whenever presented with an AI-powered product. Am I right?
Anne (21:47)
Yes, yeah, absolutely. It's not easy and it's quite early days. But I would say, cast your mind back to Wright's law, which we talked about a little bit earlier. Things get better the more they're used. So the sweet spot is things that get a lot of use and are open source: there's loads of users, there's loads of demand, there's loads of ability to scale it up and get more and more users using it, they care about it being cheaper, and it's open source, so lots of people can work on improving it. That will get better according to Wright's law: as things get used more, they get better. That is the Python lesson.
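[Editor's note] Wright's law, which Anne invokes here, has a simple standard form: each doubling of cumulative production (or use) cuts unit cost by a fixed percentage. The sketch below uses an assumed 20% learning rate purely for illustration; no measured figure for any AI model is implied.

```python
import math

# Wright's law sketch: cost(n) = cost(1) * n ** log2(1 - learning_rate).
# With a 20% learning rate, every doubling of cumulative units
# multiplies the unit cost by 0.8.
def wright_unit_cost(first_unit_cost: float, cumulative_units: float,
                     learning_rate: float = 0.20) -> float:
    """Unit cost after `cumulative_units` have been produced/used."""
    exponent = math.log2(1.0 - learning_rate)  # negative for any positive rate
    return first_unit_cost * cumulative_units ** exponent

# Cost of the 1st, 2nd, 4th and 8th unit, starting from 100:
costs = [wright_unit_cost(100.0, n) for n in (1, 2, 4, 8)]
```

With these assumptions the sequence is 100, 80, 64, 51.2: the more something gets used, the cheaper each use becomes, which is the mechanism behind "the Python lesson".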
Gaël Duez (22:28)
But that's a super interesting point, because that means also that the fourth question should be: folks, what are the feedback loop mechanisms that you're putting in place to get every early stage signal that I need? Because we're not that sure where we're going. So I want to be sure that I'm riding the Python horse rather than the other one, and making sure that I'm using an open source AI model whose community will actually keep on growing. And maybe it should be something as trivial as: how many commits did we have on this open source model in the last month or the last quarter? And if we see the number drop, then it should trigger some sort of red flag saying, my God, we've invested quite a lot in this product powered by this model, and the community around this model is shrinking. It is a red alert and we should really pay attention, to eventually migrate before it's too late. So that's actually a fourth point, which is super interesting, that you raised.
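[Editor's note] The feedback loop Gaël sketches, watching monthly commit counts and raising a red flag when they drop, is simple enough to express in code. The threshold and the sample figures below are illustrative assumptions; in practice the counts would come from your code forge's API or `git log`.

```python
# Minimal community-health check: flag when the latest month's commit count
# falls well below the average of the preceding months. The 0.5 threshold
# is an arbitrary illustrative choice, not a recommended value.
def community_red_flag(monthly_commits: list[int],
                       drop_threshold: float = 0.5) -> bool:
    """True if the latest month is below drop_threshold * prior average."""
    if len(monthly_commits) < 2:
        return False  # not enough history to judge a trend
    *history, latest = monthly_commits
    baseline = sum(history) / len(history)
    return latest < drop_threshold * baseline

# A steadily active project vs. one whose community is shrinking:
healthy = community_red_flag([120, 130, 125, 118])
shrinking = community_red_flag([120, 130, 125, 40])
```

A real monitor would want smoothing and more signals than raw commits (contributors, release cadence, issue response times), but the shape of the check is the same.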
Anne (23:27)
Yeah, and of course, you have to have the ability to migrate. So this comes back a little bit to best operational practice. If you can't change, well, there is no single great decision you're going to make today that is going to work forever from now on. With AI, everything will change. I mean, just look at the timeline that I've just given you. It's an accelerating timeline. A load of stuff happened, you know, we started in 2017, but then massive stuff happened in 2025. It's an accelerating timeline. There's no model that you can choose today which is guaranteed to be a good choice for the next 10 years. It is going to change on a moment by moment basis. And that doesn't necessarily mean you have to move on a moment by moment basis, but at some point, you are going to have to move. And as we were saying, Python was a good choice, but it was a good choice for some people because they kept the ability to change, and they migrated to the new versions of Python. But it's a major issue. The hyperscalers have mentioned to me in the past how much waste there is on hyperscaler systems because of customers who are running old versions of Python they can't migrate away from, versions that are just five or 10 times less efficient than a modern version of Python. And all those CPU cycles are just waste. All those carbon emissions are just waste. And that's only because they don't have CI/CD, they don't have automated testing, they can't upgrade, they can't change.
Gaël Duez (25:11)
It comes back to the quality of your entire tech stack and how agile it is. Not the fancy wording of agile, but truly agile: the ability to ship code efficiently and almost automatically, for sure. Anne…
Anne (25:26)
Absolutely.
Gaël Duez (25:28)
There is one last question that we need to address, which is how we measure things, because we mentioned several times 10 times more efficient, et cetera, et cetera. But according to you, in this very blurry landscape of measuring AI, and that's only for energy efficiency, in carbon efficiency and water efficiency it's even more complicated, what are the trends? What do you see? How comfortable are you when you say, for instance, DeepSeek is 10x more efficient, whatever, et cetera? What did you notice when you researched this topic?
Anne (26:10)
There are numbers all over the place, I would say. And the thing is that in different use cases, the situation is different. So maybe it's 10x better in some cases, but it's not in others. It's complicated. I would say 10x is useful because it indicates that it is a significant amount, at least in some useful use cases. But as I was saying, 10x doesn't last forever. Now DeepSeek is saying 70 times more efficient for some particular use cases. Is that a common use case? Is that what everybody's going to be doing? Is it not? But it does suggest a direction of travel, and at the moment I think the only thing we can do is say, what's the direction of travel, and is that the right direction of travel for us? And I'm going to say over and over again something that I always say, which is: if in doubt, just measure cost and just care about cost. Because, you know, time of use tariffs are going to come in. India is really leading the way on that, and Spain as well has introduced them. There are a couple of European countries with time of use tariffs.
Once those are globally accepted, which they eventually will be, then carbon and cost will be nicely aligned. So if you get your head around the idea that there are going to be time of use tariffs, and you define everything that way and get used to measuring in terms of the cost of running your systems, you will eventually align with the energy transition. That's my thinking on that.
Gaël Duez (28:07)
Yeah, it makes sense. I mean, you're very optimistic. I meet from time to time people who are a bit less optimistic than you are. But at least from the energy transition perspective, it makes sense to have this approach and this alignment of cost and carbon efficiency. I would maybe add to this that when it comes to being comfortable with the numbers that are thrown all over the place, because this is really what it is, it's basically: know your context and your use case. Also, something that concerns me regarding the ability to get accurate figures is that the way even the inference phase is run today is getting more and more complex. You mentioned in our previous discussion the chain of thought approach, which means that now you throw the tokens at the model and the model will not immediately answer you back, but will pause and take the time to redo the calculation, sometimes even accessing other specialized tools. We see now that some models are also browsing the web to get confirmation of what they emitted as a first answer, et cetera, et cetera. So even the use phase is getting more and more complex. And I think in that case, it's really, really important to know which use case we're talking about, which context we're talking about. And that sort of leads me to a call for regulation, or at least common norms, because it's almost impossible otherwise for developers or users to really understand the cost, the energy cost, of this specific use case, this specific request. I got it with the trend of follow the money, it's a good one, but to be able to compare, I think we need a bit more transparency as well, because things are getting so complicated that even someone truly willing to provide the information from one of these big companies will struggle to access the data.
Anne (30:06)
Yeah, it is really, really hard. I totally agree with you. It's not an easy question to answer, really. I think you can only really compare for yourself: is my use case better or worse this week than last week, after this change and before this change?
Gaël Duez (30:23)
Yeah, which could be a good approach. Anyway, thanks a lot. I mean, that was an amazing discussion. We went through all the history of generative AI, which is a funny word to use, knowing that it's, I mean, less than five years.
Anne (30:39)
We didn't even do all of it. We could have; there's more interesting stuff in there to talk about, but yeah.
Gaël Duez (30:47)
Let's keep it for Green IO London in that case. No, let's keep it for another episode. Maybe you will be the first one to join the podcast three times. I would say two final questions. I mean, you've got quite a lot of resources being shared on AI and energy, and AI and efficiency. Is there one specific resource that you would advise the listeners to follow, on top of following your LinkedIn page?
Anne (31:19)
Well, this is what I say all the time, and I kind of think, well, nobody could have not read Building Green Software, because it's so much designed for all the listeners of the podcasts that I speak on. But I will tell you that the sales numbers do not reflect the idea that everybody who's listening to the podcasts that I host has read the book. And they would enjoy the book. If you're spending any time listening to this, read Building Green Software, or listen to it in audiobook form, because it is available as an audiobook. It's all about the principles. You need to understand the principles and then work up from them, because everything changes so much all the time. That's why in the book we tried to focus on the principles.
Gaël Duez (32:03)
Makes total sense. And maybe my final question: even if you already shared, I would say, at least a positive or optimistic mindset, do you have a piece of news to share with the audience related to sustainability and IT?
Anne (32:20)
Well, I will just reiterate a stat that I gave earlier: China is building power stations, one gigawatt per day worth of new renewable power. And we all need to be looking at China and learning from what they're doing. They're following Wright's law. They're learning by doing. And that is a huge resource that we all have: to go and look and say, what is China doing? Let's do some of that.
Gaël Duez (32:51)
Okay, and as usual, I'll put the links to all these reports in the show notes. Anne, thanks a lot. It was a great discussion. Really looking forward to seeing you in a few weeks in London. And that was an amazing job that you did, investigating all of these trends and all of this history. So thanks a lot.
Anne (33:13)
Well, I was cursing you for a while, because it was taking me so much time, but I'm really pleased that I know more about what's going on. It's something incredibly important these days, so I'm very pleased with it. Thank you very much for asking me.
Gaël Duez (33:17)
Yeah, you're more than welcome, and actually I feel more blessed than cursed. Thanks a lot and see you very soon. Bye bye.
Anne (33:33)
See you soon, Green IO.
Gaël Duez (33:44)
Thank you for listening to this Green IO episode. Because accessible and transparent information is in the DNA of Green IO, all the references mentioned in this episode, as well as the full transcript, are in the show notes. You can find these notes on your favorite podcast platform and, of course, on the website greenio.tech. If you enjoyed this episode, please give us a thumbs up on YouTube or rate the podcast five stars on Apple Podcasts or Spotify. Sharing this episode on social media or directly with relatives working with AI is a great move to provide them with insights on this hot topic. You've got the point. Being an independent media, we rely mostly on you to get more responsible technologists on board.
In our next episode, we will keep on exploring the British IT sustainability landscape. After all, Green IO London is in two weeks. We will welcome Ross Cockburn from Reusing IT and Elaine Braun, the CEO of the Edinburgh Remakery, and they will both explain to us why they hate recycling. Stay tuned. Either way, Green IO is a podcast and much more, so visit greenio.tech to subscribe to our free monthly newsletter and check the conferences we organize across the globe. As we already discussed in this episode, Green IO London is almost there. September 23rd and 24th are the dates. As a Green IO listener, you can get a free ticket to any Green IO conference using the voucher GREENIOVIP. Just make sure to get one before the 30 free tickets are all gone.
I'm looking forward to meeting you there to help you, fellow responsible technologists, build a greener digital world.