Environment Variables
Real Efficiency at Scale with Sean Varley
July 24, 2025
Anne Currie is joined by Sean Varley, Chief Evangelist and VP of Business Development at Ampere Computing, a leader in building energy-efficient, cloud-native processors. They unpack the energy demands of AI, why power caps and utilization matter more than raw compute, and how to rethink metrics like performance-per-rack for a greener digital future. Sean also discusses Ampere’s role in the AI Platform Alliance, the company’s partnership with Rakuten, and how infrastructure choices impact the climate trajectory of AI.

Learn more about our people:

Find out more about the GSF:

Resources:

If you enjoyed this episode then please either:

TRANSCRIPT BELOW:

Sean Varley: Because at the end of the day, if you want to be more sustainable, then just use less electricity. That's the whole point, right. 

Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.

I'm your host, Chris Adams. 

Anne Currie: Hello and welcome to the world of Environment Variables, where we bring you the latest news and updates from the world of sustainable software. I'm your guest host today, so you're not hearing the usual dulcet tones of Chris Adams. My name is Anne Currie. And today we'll be diving into a pressing and timely topic: how to scale AI infrastructure sustainably in a world where energy constraints are becoming a hard limit. And that means that we're going to have to be a little bit more clever and a little bit more careful when we choose the chips we run on. It's tempting to believe that innovation alone will lead us towards greener compute, but in reality, real sustainability gains happen when efficiency becomes a business imperative, when performance per watt, cost and carbon footprint are all measured and all have weight. So, that's where companies like Ampere come in, with cloud native, energy efficient approaches to chip design. They're rethinking how we power the AI boom, not just faster but smarter. It's a strategy that aligns directly with the Green Software Foundation's mission to reduce carbon emissions from the software lifecycle, particularly in the cloud. So in this episode, we'll explore what this looks like at scale and what we can learn from Ampere's approach to real world efficiency. What does it take to make an AI-ready infrastructure that's both powerful, effective, and sustainable? Let's find out. And today we have with us Sean Varley from Ampere.

So Sean, welcome to the show. Can you tell us a little bit about yourself?

Sean Varley: Yeah, absolutely Anne, and thanks first for having me on the podcast. I'm a big fan, so, I'm looking forward to this conversation. So I'm the chief evangelist of Ampere Computing. And, I, now what that means is that we run a lot of the ecosystem building and all of the partnership kind of, works that go on to support our silicon products in the marketplace.

And also, build a lot of awareness right around some of these concepts you introduced. You know, kind of building out that awareness around sustainability and power efficiency and how that really works within different workload contexts, and workload contexts change over time.

So all of those sorts of things are kind of in scope, for the evangelism role.

Anne Currie: That's, that is fantastic. So I'll just introduce myself a little bit as well. My name is Anne Currie. If you haven't heard the podcast before, I am one of the authors of O'Reilly's new book, Building Green Software, which I, as I always say, everybody who's listening to this podcast should read Building Green Software.

That was, that is entirely why we wrote the book. I'm also the CEO of the training and green consulting company Strategically Green. So, hit me up on LinkedIn if you want to talk a little bit about training or consultancy. But back to the podcast. Oh, and I need to remember to say that everything we'll be talking about today, there will be links about it in the show notes.

So you don't need to worry about writing down URLs or anything. Just look at the show notes. So now, I'm actually going to start off the podcast by harking back to somebody that we had on the podcast a couple of months ago, a chap called Charles Humble. And the assertion that he was making was that we all need to wake up to the fact that there isn't just one chip anymore, there isn't a default chip anymore that everybody uses and that's kind of good enough, the best in all circumstances. When you are setting up infrastructure, in the cloud for example, and you have the dropdown that picks which chip you're going to use, the default might be Intel, for example. That is no longer a no-brainer, that you just go with the default. There are lots and lots of options, to the extent that, I mean, Ampere is a new chip company that decided to go into the market. So one of the questions that I have is why? You know, what gap did you see that it was worth coming in to fill?

Because 10 years ago we would've said there was no real gap, wouldn't we?

Sean Varley: That's right. Yeah. Actually it was a much more homogenous ecosystem back in those days. You know, and I, full disclosure, I came from Intel. I did a lot of time there. But about seven years, six years ago, I chose to come to Ampere. And part of this was the evolution of the market, right?

The cloud market came in and changed a lot of different things, because there's kind of classically, especially in server computing, there's sort of the enterprise and the cloud and the cloud of course has had a lot of years to grow now. And the way that the cloud has evolved was to, really kind of, you know, push all of the computing

to the top of its performance, the peak performance that you could get out of it. But there, you know, nobody really paid attention to power. Going back, you know, 10, 15, 20 years, nobody cared. And those were in the early days of Moore's law. And, part of what happened with Moore's Law is as frequencies, you know, grew then so did performance, you know, linearly.

And I think that sort of trained into the industry a lot of complacency. And that complacency then became more ossified into, you know, the way that people architected and what they paid attention to, the metrics that they paid attention to when they built chips. But going back about seven, eight years, we actually saw that there was a major opportunity to get equal or better performance for about half the power. And that's kind of what formed some of our interest in building a company like Ampere. Now, of course, Ampere, since its inception, has been about sustainable computing, and, me being personally sort of interested in sustainability and green technology and those sorts of things just outside of my profession, you know, I was super happy to come to a company like Ampere that had that in its core.

Anne Currie: And that's very interesting. So really, Ampere, your chip is an X86 chip, so it's not competing against ARM, it's more competing against Intel and AMD?

Sean Varley: It's actually, it is an ARM chip. It's based on the ARM instruction set. And, yeah, so it's kind of an interesting dynamic, right? There's been a number of different compute architectures that have been put into the marketplace, and the X86 instruction set, classically by Intel and AMD who followed them, has dominated the marketplace, right?

And, well, at least they've dominated the server marketplace. Now, ARM has traditionally been in mobile handsets, embedded computing, things like this. But that architecture was built and its roots grew up in more power-conscious markets, you know, because anything running on a battery you want to have be pretty power miserly,

Anne Currie: Yeah.

Sean Varley: to use the word. So yeah, the ARM instruction set and the ARM architecture did offer us some opportunities to get a lift when we were a young company, but it doesn't necessarily have that much of a bearing on overall what we can do for sustainability, because there's many things that we can do for sustainability and the instruction set of the architecture is only one of them.

And it's a much smaller one. It is probably way too detailed to get into on this podcast, but it is one factor. And so yes, we are ARM instruction set based, and about four years back, we actually started creating our own cores on the instruction set. And that's sort of been an evolution for us because we wanted to maintain this focus on sustainability, low power consumption, and of course, along with that, high performance.

Anne Currie: Oh, that's interesting. So as you say, the instruction set is only one part of what you're doing to be more efficient, to use less power per operation. What else are you doing?

Sean Varley: Oh, many things. Yeah. So the part of this that kind of gets away from the instruction set is how you architect and how you present the compute to the user, which may get further into some of your background and interest around software, because part of what we've done is architect a chip, or now a family of chips, that are very, well, they start off with area efficiency in the core.

And how we do a lot of that is we focus on cache, cache configuration. So we use a lot more of what we call L2 cache, which is right next to the cores, and that helps us get performance. We've kind of steered away from the X86 industry, which is much more about a larger L3 cache, which is a much bigger piece, a much bigger part of the area of the chip.

And so that's one of the things that we've done. But we've also kind of just decided that many of the features of the X86 architecture are not necessary for high performance or efficiency in the cloud. And part of this is because software has evolved. So what are those things? Turbo, for example. Turbo is a feature that kind of moves the frequency of the actual cores around, depending on how much thermal headroom the chip has. And so if you have a small number of cores, the frequency could be really high. But if you have a lot of cores doing things, then it pulls the frequency back down low, because you've only got so much thermal budget in the chip. So we said, oh, we're just gonna run all of our cores at the same frequency.

And we've designed ourselves at a point in the, you know, voltage frequency curve that allows us that thermal headroom. Now, that's just one other concept, but so many things have really kind of, you know, created this capability for us to focus on performance per watt, and all of those things are contributors to how you get more efficient.

Anne Currie: Now that's, that is very interesting. So what was your original motivation? Was it for the cloud? Were you designing with the cloud in mind, or were you designing more with devices in mind?

Sean Varley: Yeah, we absolutely are, you know, designing for cloud, because cloud is such a big mover in how things evolve, right? I mean, if you're looking at markets, there's always market movers, market makers, and the way that you can best accomplish getting something done. So if our goal is to create a more sustainable computing infrastructure, and now in the age of AI that's become even more important, but if our goal is that, then we need to go after the influencers, right? The people that will actually, you know, move the needle. And so the cloud was really important, and we've had kind of this, you know, overall focus on that market, but our technology is not limited to it. Our technology is, you know, by far and away much more power efficient anywhere from all the way out at the edge, in devices and automotive and networks, all the way into the cloud. But the cloud also gave us a lot of the paradigms that we have also been attached to.

So when we talk about cloud native computing, we're really kind of hearkening to that software model that was built out of the cloud. The software model built out of the cloud is something that they call serverless, in the older days. Or now it's, you know, microservices and some of these sorts of concepts.

And so as software has grown, so have we, you know, kind of put together a hardware architecture that meets that software where it is, because what that software is about is lots of processes, you know, working together to formulate a big service. And so those little processes are very latency sensitive.

They need to have predictability, and that's what we provide with our architecture: lots of cores that all run at the same kind of pace, and so you get a high degree of predictability out of that architecture, which then makes the software and the entire service more efficient.

Anne Currie: So that's, that is very interesting. And I hadn't realized that. So obviously with things like serverless going on in clouds, the software that's actually running on the chip is software that was usually written by the cloud provider. You know, the clouds wrote that software.

So you are isolated from it. One of the interesting things about high performance software is that it's hard, really hard to write. In fact, in Building Green Software, I always tell people, don't start there, it's really hard. You need specialist skills. You need to know the difference between L2 caches and L3 caches.

And you need to know how to use them. And the vast majority of engineers do not have those skills, and will never acquire those skills. But the cloud providers are providing managed services that you are using, like, you're just writing a code snippet that's running in Lambda or whatever. You are not writing the code that makes that snippet run. You're not writing the code that talks to the chip. Really super specialist engineers at AWS or Azure or whatever are writing that code.

So is that the, is that the move that you were anticipating?

Sean Varley: Absolutely. I mean, that's a big part of it, right? And as you just articulated a lot of the platform as a service kind of code, right, so that managed service that's coming out of a hyperscaler is, you know, built to be cloud native. It's built to be very microservice based.

And it has a lot of what we call SLAs in the industry, right? Service level agreements, which mean that you need to have a lot of different functions complete on time for the rest of the code to work as it was designed. And as you said, it is a much more complex way to do things, but the overall software industry has started to make it a lot easier to do this, right. And things like containers, you know, which are inherently much more efficient, you know, sort of entities, yeah, like, footprints, images is what I was really going for there. They are, you know, already you've cut out a lot of the fat, right, in the software. You've gotten down to a function. You mentioned Lambda, for example. A function is the most, you know, sort of nuclear piece of code that you could potentially write, I suppose, to do something. And so all of these functions working together, they need these types of execution architectures to really thrive, and yes, you're right, that developers, you know, they have come a long way in having these serviceable components in the industry. You know, Docker sort of changed the world about, what is it, 10 years ago now, maybe longer. And all of a sudden people could go and grab these little units of what they call endpoints in, you know, kind of software lingo. And so if I wanna get something done, I can go grab this container that will do it. And those containers, and the number of containers that you can run on a cloud native architecture like Ampere's, is vastly better than what you can find in most X86 architectures.

Why? Because these things run on cores. Right. And we have a lot of them.
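
To make the "function as the smallest unit of code" idea concrete, here is a minimal sketch of such an endpoint. It is a generic, hypothetical handler in the Lambda-style shape Anne mentions, not tied to any particular cloud provider or to Ampere's platform, and the event fields are invented for illustration.

```python
# A minimal, hypothetical function-style endpoint: one small, stateless unit
# of work that a platform can package into a container image and schedule
# onto whichever core is free. The event fields are invented for illustration.

def handler(event, context=None):
    """Toy task: work out thumbnail dimensions for an uploaded image."""
    width = event.get("width", 0)
    height = event.get("height", 0)
    max_edge = 256
    scale = max_edge / max(width, height, 1)
    return {
        "thumb_width": int(width * scale),
        "thumb_height": int(height * scale),
    }

if __name__ == "__main__":
    print(handler({"width": 4032, "height": 3024}))
    # {'thumb_width': 256, 'thumb_height': 192}
```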

Anne Currie: Yeah, so that is very interesting. So everybody who's listening to the podcast must also read my other book on this very subject, which is called The Cloud Native Attitude. And it was about why Docker is so important, why containers are so important.

Because they allowed you to wrap up programs and then move those programs around. It basically put a little handle on things that let you move stuff around, start and stop it, and orchestrate it. And what that meant was

Sean Varley: I love that analogy, by the way, the handle, and you just pick it up and move it anywhere you want it, right.

Anne Currie: Yeah, because really that was all that Docker did. It wrapped something that was a fairly standard Linux concept that had been around quite a long time, and it put a nice little API on it, which was effectively a handle, which let other tools move it around.

And then you've got orchestrators like Kubernetes, but you also got lots of other orchestrators too.

But what that meant in the cloud native world was that you could have services that were written by super experts, or open source, so it had lots of experts from all over the place writing them and tuning them and improving them, letting, well, not Moore's Law, Wright's Law, the law that systems get better the more you use them, kick in. It gave people a chance to go in and improve things, but have the people who are improving things be specialists, and let that specialist code, which was incredibly hard to write, be shared with others. So you're kind of amortizing the incredibly difficult work. So fundamentally, what you are saying, and I think, you know, you could not be singing more from my hymn sheet on this, is that it's really hard to write code that interfaces well and uses CPUs well so that they're highly efficient, so you get code efficiency and you get operational efficiency. Really hard to do. But if you can find a way that it doesn't require every single person to write that code, which is really hard, but you can share it and leverage it through open source implementations or cloud implementations written by the cloud providers, then suddenly your CPUs can do all kinds of stuff that they couldn't have done previously.

Is that what you're saying?

Sean Varley: Absolutely, and I was gonna tack on one little thing to your line, which is that it's really hard to do this by yourself, right?

And this is where the open source communities and all of these sorts of things that have really kind of revolutionized, especially the cloud, coming back to that topic that we were talking about.

Because the cloud has really, I think, evolved on the back of open source software, right? And that radically changed how software was written. But now, coming back to your package and your handle, you can go get a function that was written, and probably optimized, by somebody who spent the time to go look at how it ran on a specific architecture.

And now with things like Docker and GitHub and all these other tool chains where you can go out and grab containers that are already binary compiled for that instruction set that we were talking about earlier, this makes things a lot more accessible to a lot more people. And in some ways, you have to trust that, you know, this code was written to get the most out of this architecture, but sometimes there's labeling, right?

This was written for that, or, you know, a classic example in code is that certain types of algorithms get inline assembly done to make them the most efficient that they can be. And all of that usually was done in the service of performance, right? But one of the cool things about trying to do things in service of performance is that you can actually usually get better power efficiency out of that if you use the right methodologies. Now, if the performance came solely from something that was frequency scaled, that's not gonna be good for power necessarily. But if it's going to be done in what we call a scale out mechanism where you get your performance by scheduling things on, not just one core, but many cores,

and they can all work together in service of that one function, then that can actually create a real opportunity for power efficiency. 
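
As a rough illustration of the scale-out idea Sean describes, here is a small Python sketch that splits one job across all available cores instead of leaning on a single core boosted to a high frequency. The workload is a stand-in, meant only to show the scheduling pattern, not any particular framework or Ampere-specific feature.

```python
# Sketch of the "scale-out" pattern: split one job into independent chunks
# and run them across many cores at a steady clock, rather than leaning on
# one core boosted to a high frequency. The workload is a stand-in.
import os
from concurrent.futures import ProcessPoolExecutor

def work(chunk):
    # Stand-in for a real task (transcoding, hashing, a batch of inference...)
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = os.cpu_count() or 1
    chunk_size = -(-len(data) // n_workers)  # ceiling division
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(work, chunks)
    print(f"{n_workers} workers, total = {sum(results)}")
```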

Anne Currie: Yeah, so that maps back to something that in Building Green Software we talk about, which is utilization. So, you know, a machine needs to be really well utilized, because if it's not well utilized, it still uses pretty much the same power, but if it's not actually doing anything, it's not doing anything useful with that power. It's just a waste.

Sean Varley: I'm so glad you brought this up.

Anne Currie: Well go for it. Go for it. You know, you are the expert in this area.

Sean Varley: Oh, no. Yeah, I think you're exactly right. You hit the nail on the head, and part of the problem in the world today is that you have a lot of machines out there that are underutilized, and that low utilization of these machines contributes a lot to power inefficiency. Now I'm gonna come back to some other things that go back to what we were talking about in terms of processor architecture, but it's still super relevant to code and efficiency. So the one thing, going back to when everybody only had one choice on the menu, which was Intel at the time,

was that that architecture instilled some biases or some habits, pick your word here, but people defaulted to a certain type of behavior. Now, one of the things that it trained into everyone out there in the world, especially code writers and infrastructure managers, was that you didn't ever get over about 50% utilization of the processor, because what happened is, if you did, then after 50% all of the SLAs I was talking about earlier, that service level agreement where things are behaving nicely, went out the window, right? Nobody could then get predictable performance out of their code. Because why?

Hyperthreading. So hyperthreading is where you share a core with two execution threads. That sharing meant that once you went over 50%, all of a sudden you were heavily dependent on the hyperthreading to get any more performance. And what that does is it just messes up all the predictability of the rest of the processes operating on that machine.

So the net result was to train people to run at 50% or below. Now with our processors, if you're running at 50% or below, that means you're only using half of our complete capacity, right? So we've had to go out and train people, "no, run this thing at 80 or 90% utilization because that's where you hit this sweet spot," right?

That's where you're going to save 30, 40, 50% of the power required to do something because that's how we architected the chip. So these are the kinds of biases and habits and sort of rules of thumb that we all end up having to kind of combat. 
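
For readers who want the arithmetic behind that sweet spot, here is a back-of-envelope sketch. The power and throughput figures are purely illustrative assumptions, not measurements of any particular processor; the point is simply that energy per request falls sharply as utilization rises, because the idle power is amortized over more work.

```python
# Back-of-envelope illustration of why utilization matters for energy per
# unit of work. The figures below are illustrative, not measurements of any
# particular chip.
IDLE_WATTS = 150.0        # hypothetical server draw at idle
PEAK_WATTS = 350.0        # hypothetical draw at full load
PEAK_REQS_PER_SEC = 1000  # hypothetical throughput at 100% utilization

def joules_per_request(utilization):
    """Assume power scales roughly linearly between idle and peak."""
    watts = IDLE_WATTS + (PEAK_WATTS - IDLE_WATTS) * utilization
    reqs_per_sec = PEAK_REQS_PER_SEC * utilization
    return watts / reqs_per_sec

for u in (0.2, 0.5, 0.9):
    print(f"{u:.0%} utilization -> {joules_per_request(u):.2f} J/request")

# Roughly 0.95 J at 20%, 0.50 J at 50%, 0.37 J at 90%: the same request
# costs far less energy on a well-utilized machine.
```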

Anne Currie: Yeah, and it's interesting. I mean, as you say, that completely maps back to a world in which we just weren't thinking about power, you know, we just didn't care about the level of waste. So, quite often enterprise engineers and architects are very used these days to the idea of lean and agile.

It's about reduction of waste. And the biggest waste there is underutilized machines. And we don't tend to think about it, and as you say, in part that's because we were trained not to think about it.

Sean Varley: And also people didn't really care back in the day, you know, going back again 10, 15, 20 years ago. People didn't really care that much about how much power was consumed in their computing tasks because it wasn't top of mind for people, right. And frankly, we consumed a lot less of it, primarily because we had a lot less infrastructure in service, worldwide I'm talking about, but also because, you know, older chip architectures and older silicon process technology consumed less power. Now as we've gotten into modern process technology, that whole thing has changed. And now you've got chips that can burn hundreds and hundreds of watts by themselves, not to mention the GPUs, which can burn thousands of watts. And that's just a wholesale shift in, you know, kind of the trajectory of power consumption for our industry.

Anne Currie: So you've brought up AI and GPUs there, and obviously even more AI-focused chips that are potentially even more power hungry. How does Ampere help? 'Cause Ampere is a CPU, not a GPU or a TPU, so how does it fit into this story?

Sean Varley: It fits in a number of different ways. So, maybe a couple of definitions for people. A CPU is a general purpose processor, right? It runs everything, and in, you know, kind of everyday parlance, it's an omnivore. It can do a lot of different things and it can, you know, do a lot of those pretty well. But what you have is an industry that is evolving into more specialized computing. That's what a GPU is. But there are many other examples, accelerators and other types of, you know, not homogeneous but heterogeneous computing, where you've got different specializations. The GPU is just one of those.

And, but in AI, what we've found is that the GPU architecture, of course, has driven that overall workload, you know, to a point where the power consumption of that type of a workload, because there's a lot of computational horsepower required to run AI models, has driven the industry up and to the right in terms of power consumption. And so there's a bias now in the industry that, well, if you're gonna do AI, it's just gonna take a ton of power to do it. The answer to that is, "maybe..." right? Because what you've got is maybe a little bit of a lack of education about the whole pantheon of AI, you know, kind of execution environments and models and things like that, and frameworks and all sorts of things.

All of these things matter because a CPU can do a really good job of doing the inference operation, for AI and it can do an excellent job of doing it efficiently. 'Cause coming back to your utilization, you know, kind of argument we were talking about earlier. Now, in GPUs, the utilization is even far more important because as you said, it sits there and burns a lot of power no matter what.

So if you're not using it, then you definitely don't want that thing just kind of, you know, running the meter. And so utilization has become a huge topic in GPU circles. But CPUs have a ton of technology in them for low power when not utilized.

You know, that's been a famous, you know, kind of set of capabilities. But also AI is not one thing. And so AI is the combination of specialized things that are being run in models and then a lot of generalized stuff that can be run and is run on CPUs. So where we come in, Ampere's concept for all that is what we call AI compute.

So AI compute is the ability to do a lot of the general purpose stuff and quite a bit of that AI specific stuff on CPUs, and you have a much more kind of flexible platform for doing either.
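
As a hedged illustration of what running inference on a CPU with explicit control over core usage can look like, here is a tiny sketch using PyTorch purely as an example framework. The model is a stand-in, not a real LLM, and the INFER_THREADS environment variable is an invented name used only for illustration.

```python
# Hedged sketch of CPU inference with an explicit thread count, one of the
# knobs that keeps core utilization (and energy per result) sensible.
# PyTorch is used purely as an example framework; the tiny model is a
# stand-in, not a real LLM, and INFER_THREADS is an invented variable name.
import os
import torch
import torch.nn as nn

torch.set_num_threads(int(os.environ.get("INFER_THREADS", os.cpu_count() or 1)))

model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))
model.eval()

batch = torch.randn(32, 512)
with torch.inference_mode():
    out = model(batch)

print(out.shape)  # torch.Size([32, 512])
```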

Anne Currie: So it's interesting. Now I'm going to show my own ignorance here, 'cause I've just thought of this and therefore I'm gonna go horribly roll with it. There are kind of platforms to help people be more hardware agnostic when it comes to stuff like, Triton, is it? Do you fit in with anything like that, or does everybody have to kind of decipher for themselves which bit of hardware they're gonna be using?

Sean Varley: Oh man. We could do a whole podcast on this. Okay.

Yeah. Let me try to break this down in a couple of simple terms. So, yes, first of all, there's two main operations in AI. There's training and there's inference. Now training is very high batch, high consumption, high utilization of a lot of compute.

So we will think of this as maybe racks full of GPUs, because it's also high precision and it's kind of a very uniform operation, right? Once you set it, you kind of forget it and you let it run for, famously, weeks or months, right? And it turns out a model, but once the model's turned out, it can be run on a lot of different frameworks.

Right. And so this is where, you know, that platform of choice part comes back in, because inference is the operation where you're gonna get some result, some decision, some output out of a model. And that's gonna be, by far and away, the vast majority of AI operations of the future, right?

We're still training a lot of models, don't get me wrong. But the future is gonna be a lot of inference, and that particular operation doesn't require as high a precision. It doesn't require a lot of the same characteristics that are required in training. Now, that can be run in a lot of different places on these open source frameworks.

And also what you're starting to see now is specializations in certain model genres. A genre, I would say, is like the Llama genre, you know, from Meta. You know, they've built all of their own, much more efficient, you know, kind of frameworks, their CPP, their C++ implementation of the Llama frameworks.

So you got specialization going on there. All that stuff can run on CPUs and GPUs and accelerators and lots of other types of things. Now it becomes more of a choice. What do I want to focus on when I do this AI operation? Do I really want to focus on something that's going to, you know, get me the fastest result, you know, ever?

Or can I maybe let that sort of thing run for a while and then give me results as they come? And a lot of this sort of decision making, use case based decision making will dictate a lot of the power efficiency of the actual AI operation.

Anne Currie: That is interesting. Thank you very much for that. So Ampere, you see, so Ampere is basically in that second thing, you are one of the options for inference.

Sean Varley: That's right, yeah. And our sort of whole thought process around this is that we want to provide a very utilitarian resource, right? Maybe that's the right word. Because the utilitarianism of it is not that it's low performance or anything like that, it's still high performance.

It's just that you're not going to necessarily need, you know, all of the resources of the most expensive or the most, kind of, parameter-laden model. 'Cause models come with a lot of parameters. We hear this term, right? You know, up to trillions of parameters, down to millions of parameters.

And somewhere in the middle is kind of that sweet spot right now, right? Somewhere in the 10 to 30 billion parameter range, and that sort of thing requires optimization and distillation. So we are building a resource that will be that sort of utility belt for AI of the future, where you need something that runs, you know, like a Llama 8 billion type of model, which is gonna be a workhorse of a lot of the AI operations that are done in GenAI, for example. That will run really well, and it will also run with a lot less power than what might have been required if you were to run it on a GPU. So there's gonna be a lot of choices, and there will need to be, you know, folks that specialize in doing AI for a lot less, you know, kind of power and cost.
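
A quick back-of-envelope calculation suggests why a model in that range is CPU-friendly: the weight footprint alone, at different precisions, fits in ordinary server memory once quantized. These are rough floor values only, ignoring KV cache and activation overheads.

```python
# Rough floor for the weight footprint of an "8 billion parameter" model at
# different precisions. Real runtimes add KV cache and activation overhead,
# so treat these as minimums, not exact requirements.
PARAMS = 8e9

for name, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
    gib = PARAMS * bytes_per_param / 2**30
    print(f"{name:>5}: ~{gib:.0f} GiB of weights")

# fp16: ~15 GiB, int8: ~7 GiB, 4-bit: ~4 GiB -- small enough, once quantized,
# to sit comfortably in ordinary server memory and be served from CPU cores.
```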

Anne Currie: Something that Renee, the CEO of Ampere, mentioned when she and I were on stage in the same panel a few months ago, which is how come we're talking today, and one of the things she said that very much interested me, was that Ampere chips didn't have to be water cooled, they could be air cooled. Is that true? Because obviously that's something that comes up a lot, the water use, and AI's terrible water use. What's the story on that?

Sean Varley: Yes. That is actually one of our design objectives, right? If sustainability is one of your design objectives, that is what you do, right? So part of what we've done is we've said, look, our chips run at a certain kind of ceiling from a power perspective, and we can get a lot of performance out of that power envelope.

But that power envelope's gonna stay in the range where you can air cool the chip. This provides a lot of versatility. Because if you're talking about sort of the modern data center dynamic, which is, oh, I've got a lot of brownfield, you know, older data centers, are they now gonna become obsolete in the age of AI because they can't actually run liquid cooling and stuff like that? No. We have infrastructure that goes into those types of data centers and will also get you a lot of computational horsepower for AI compute, inside a power envelope that was more reasonable, or already provisioned, for that data center.

Right? We're talking about racks that run 15 kilowatts, 22 kilowatts. Somewhere in that 10 to 25 kilowatt range is sort of a sweet spot in those types of data centers. But now what you hear these days is that racks are starting to go to 60 kilowatts, a hundred kilowatts even higher. Recently, you know, Nvidia had been pushing the industry even higher than that.

Those things require a lot of specialization, and one of the specializations that are required is direct liquid cooling, what they call DLC. And that requires a whole different refit for the data center. It's also, of course, the reason why it's there is to dissipate a lot of heat.

Right. And that requires a lot of water.
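
To put rough numbers on those rack figures, here is a small illustrative calculation. The per-node wattages are hypothetical assumptions, not vendor specifications; the point is only how quickly dense, liquid-cooled nodes exhaust a brownfield rack's power budget.

```python
# Back-of-envelope version of the rack numbers above: how many nodes fit in
# a given rack power budget. The per-node wattages are hypothetical
# assumptions, not vendor specifications.
def nodes_per_rack(rack_kw, node_watts):
    return int(rack_kw * 1000 // node_watts)

AIR_COOLED_CPU_NODE_W = 500    # hypothetical air-cooled CPU server
DENSE_GPU_NODE_W = 6000        # hypothetical liquid-cooled GPU server

for rack_kw in (15, 25, 60):
    print(f"{rack_kw:>3} kW rack: ~{nodes_per_rack(rack_kw, AIR_COOLED_CPU_NODE_W)} "
          f"CPU nodes or ~{nodes_per_rack(rack_kw, DENSE_GPU_NODE_W)} GPU nodes")

# A 15 kW brownfield rack still holds a useful number of air-cooled CPU
# nodes, while dense GPU nodes quickly push racks toward 60 kW+ and direct
# liquid cooling.
```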

Anne Currie: Which is fascinating, because the water use implications of AI data centers come up a lot at the moment, and perfectly reasonably so. It is not sustainable at the moment to put data centers in places where, and it's a shame because, in places where there is a lot of solar power, for example, there's also often not a lot of water. Right. Yeah.

If you can turn solar, the sun, into air conditioning, that's so much better than taking away all their lovely clean water that they really need to live on.

Sean Varley: Yes. 

Anne Currie: So, I mean, is that the kind of thing that you are envisaging, that it doesn't have to, you know, it works better in places where there's sunshine?

Sean Varley: Absolutely. And we create technology that can very efficiently implement a lot of these types of AI enabled or traditional, you know, kind of compute. And they could be anywhere. They could be, you know, at an edge data center in a much smaller, you know, environment where there's, you know, only a dozen racks.

But it's also equally comfortable in something where there's thousands of racks, 

because at the end of the day, if you want to be more sustainable, then just use less electricity. That's the whole point, right. 

And you know, we can get into a lot of these other schemes, you know, for trying to offset carbon emissions and all these sorts of things, and all those schemes, I'm not saying they're bad or anything like that, but at the end of the day, our whole mission is to just use less power for these types of operations. And it comes back to many of the concepts we've talked about, right? You know, utilize your infrastructure. Use code-efficient, you know, practices, which comes back to things like containers, and there's even much more refined, you know, code practices now for doing really efficient coding. And then, you know, utilize a power efficient hardware platform, right? Or the most power efficient platform for whatever job you're trying to do. And certain things can be done to advertise, you know, how much electricity you're consuming to get something done, right? And that's a whole sort of, you know, next generation of code, I think, just that power aware, you know, kind of capacity for what you're gonna run at any given moment.
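
That framing maps neatly onto the Green Software Foundation's own Software Carbon Intensity (SCI) score, which the SCI specification defines as ((E × I) + M) per R. Here is a minimal sketch with purely illustrative numbers, just to show how using less electricity per unit of work moves the score.

```python
# The Green Software Foundation's SCI score: ((E * I) + M) per R, where E is
# energy, I is grid carbon intensity, M is embodied emissions and R is the
# functional unit. The numbers below are illustrative only.
def sci(energy_kwh, grid_gco2_per_kwh, embodied_gco2, functional_units):
    return (energy_kwh * grid_gco2_per_kwh + embodied_gco2) / functional_units

# Example: serving 1,000,000 requests in an hour, before and after halving
# the energy used per unit of work.
baseline  = sci(2.0, 400, 50, 1_000_000)
efficient = sci(1.0, 400, 50, 1_000_000)
print(f"baseline:  {baseline * 1000:.2f} mgCO2e per request")
print(f"efficient: {efficient * 1000:.2f} mgCO2e per request")

# Halving the electricity roughly halves the operational part of the score;
# no offsets are involved, which is exactly the "just use less electricity"
# point.
```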

Anne Currie: Well, that's fantastic. We've talked for quite a long time and that was very information dense. It was high utilization of time to information there. I think we had quite a high rate of information passed. So that was incredibly interesting and I really enjoyed it, and I hope that the listeners enjoyed it. If there's anything that we talked about, we'll try and make sure that it's in the show notes below. Make sure that you read Building Green Software and The Cloud Native Attitude, because that's a lot of what we talked about here today. And is there anything you wanna finish with, Sean?

Sean Varley: Well, I really enjoyed our discussion, Anne, thank you very much for having me. I think these technologies are very important, and these concepts are very important. You know, there's a lot of misinformation out there in the world, as we know, not just confined to politics,

Anne Currie: Yep.

Sean Varley: but yeah, there, you know, there's a lot of education I think that needs to go on in these types of environments that will help all of us to create something that is much greener and much more efficient. And by the way, it's good practice because almost every time you do something that's green, you're gonna end up saving money too.

Anne Currie: Oh, absolutely. Yes, totally. Well, you can do it because you're a good person, which is good, but also do it 'cause you're a sensible person who doesn't have a...

Sean Varley: That's great. Yeah. Successful businesses will be green, shall be green! There needs to be a rule of thumb there.

Anne Currie: Yeah. Yeah. So it is interesting. If you've enjoyed this podcast, listen as well to the podcast that I did with Charles Humble a few weeks ago. He touched on this again, and it's an interesting one: there's a lot of disinformation out there, misinformation out there, but a lot of that is because the situation has changed.

So things that were true 10 years ago are just not true today. So it's not deliberate misinformation, it's just that the situation has changed. You know, the context has changed. So if you, you might hear things and think, "but that didn't used to be true. So it can't be true." You can't make that judgment anymore. You know, it might be true now and it wasn't true then. But yeah, that's the world. We are moving quite quickly.

Sean Varley: Yeah, technology, it moves super fast. 

Anne Currie: Absolutely. I suspect that you and I have been in the industry for, you know, 30 years past, but it's never moved as fast as it's moving now, has it really?

Sean Varley: Oh, I agree. Yeah. AI has just put a whole like, you know, afterburner on the whole thing. Yeah.

Anne Currie: Yeah, it's just astonishing. But yeah. Yeah. So the world, yes, all the rules have changed and we need to change with it. So thank you very much indeed. And thank you very much for listening and I hope that you all enjoyed the podcast and I will speak to you again soon. So goodbye from me and goodbye from Sean.

Sean Varley: Thank you very much. Bye-bye.

Anne Currie: Bye-bye. 

Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners.

To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.