Host Anne Currie is joined by the seasoned Chris Liljenstolpe to talk about the latest trends shaping sustainable technology. They dive into the energy demands of AI-driven data centers and ask the big question around nuclear power in green computing. Discussing the trajectory of AI and data center technology, they take a look into the past of another great networking technology, the internet, to gain insights into the future of energy-efficient innovation in the tech industry.
Learn more about our people:
Find out more about the GSF:
Resources:
Events:
If you enjoyed this episode then please either:
Connect with us on
Twitter,
Github and
LinkedIn!
TRANSCRIPT BELOW:
Christopher Liljenstolpe: The US grid's gonna be capped by 2031. We will be out of power in the United States by 2031. Europe will be out first. So something has to give, we have to become more efficient with the way we utilize these resources, the algorithms we build.
Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.
I'm your host, Chris Adams.
Anne Currie: Hello, and welcome to This Week in Software, where we bring you the latest news and insights from the world of sustainable software. This week I'm your guest host, Anne Currie. As you know, I'm quite often your guest host, so you're not hearing the dulcet tones of the usual host, Chris Adams. Today we'll be talking to Chris Liljenstolpe.
Christopher Liljenstolpe is a leading expert in data center architecture and sustainability at Cisco. Christopher is also the father of Project Calico and co-founder of Tigera, and he's a super expert in cloud infrastructure and green computing. But before I introduce him, I'm going to make it clear that I've known Chris for years, and he's worked very closely with my husband, so we know each other very well.
So that might explain why we seem like we know each other quite well. Who knows. What I do know from Chris is that it's impossible to say what we'll be talking about today. We will go all over the place. But Chris, do you wanna introduce yourself?
Christopher Liljenstolpe: We might even cover the topic at hand, although that is an unlikely outcome. But who knows? That might be a first. That would be a first, but it might be an outcome.
Anne Currie: So introduce yourself. Introduce yourself.
Christopher Liljenstolpe: Sure. So, as Anne said, my name's Christopher Liljenstolpe. I am currently Senior Director for Data Center Architecture and Sustainability here at Cisco, which means, once again, I failed to duck. So I'm the poor sod who's got the job of trying to square an interesting circle, which is: how do we build sustainable data centers, and what does a sustainable data center look like?
At the same time, dealing with this oncoming light at the end of the tunnel that is certainly not sunshine and bluebirds, but is a locomotive called AI. And it's bringing with it gigawatt data centers. So, you know, put that in perspective. I mean, until two years ago, we were talking about a high power data center
might be a 90 kilowatt rack data center, or a 100 kilowatt rack data center, or a 60 kilowatt rack data center. And about two years ago we went to, okay, it might be a 150 kilowatt rack data center, and that was up from 30 kilowatts years ago. It took a very long time to get to 30 kilowatts. That was good. Then from two years ago to nine months ago,
it went from 150 kilowatts to 250 kilowatts. So it took us decades to get from two kilowatts to 90 kilowatts to 150 kilowatts. And then in a year we went from 150 to 250, maybe 350. Jensen last week just took us to 600 kilowatts a rack. So yeah, that light at the end of the tunnel is not sunshine.
So yeah, how do we do sustainable data centers when you've got racks that need nuclear power plants strapped to each and every one? So, you know, I'm the one who gets to figure out what a gigawatt data center looks like and how you make it sustainable. So that's my day job.
And then, and this really becomes a system of systems problem, which is usually what I end up doing throughout most of my career. Put the Lego blocks together, build system of systems, and then figure out what Lego blocks are missing and what we need to build. So, I did that with Anne's husband on a slightly different space, which was how do you build very scalable networks with millions of endpoints for Kubernetes?
And now I'm doing this for data center infrastructure.
Anne Currie: Which is absolutely fascinating. So for listeners, a bit of background on me. I'm one of the authors of O'Reilly's new book, Building Green Software. I'm also the CEO of a learning and development company, Strategically Green, with the husband who used to work with Chris. So, in Building Green Software, Chris was a major contributor to the networking chapter.
So if you are interested in some of the background to this, the networking chapter is very high level, you don't need to know any super amazing stuff, and it'll ramp you up on the basics of networking. So have a look, have a read of that, if you want a little bit of a lightweight background to what we'll be talking about today.
But actually what we're talking about today is not networking. It's obviously a key part of any data center, but that's not really where your focus is at the moment. It sounds like energy is more what you're caring about at the moment with DCs. Is that true, or is it both? It'll always be both, but...
Christopher Liljenstolpe: It is, it's both. Energy starts behaving a bit like networking at this level. And it's getting the energy in and getting the energy out as well. The cooling is actually a really interesting part of it, but
we really start thinking about the energy as an energy network. You almost have to, when we start thinking about energy flows this size, and controlling them and managing them.
But, then there's other aspects to this as well. Some of the things that are driving this insane, I'll be right out and say it, this insane per rack density. Why do we need 600 kilowatt racks? Do we need 600 kilowatt racks? But let's assume we do need them. Why do we need them? We need to pack as many GPUs as closely together as possible.
That means that, and why do we need to do that? We need to get them as close together as possible because we want them to be network-close for very high speed, so that we have a very high performance, closely bound cluster, so that you get your ChatGPT answers very quickly,
and they don't hallucinate. So that means putting lots of GPUs and very high bandwidth memory very close to one another. And when you do that in networking, you want that to be in copper, and you want that to be a very specific kind of networking that really ends up using a whole lot of energy unless you pack it very closely together.
So that 600 kilowatts is actually the low power variant. If we stretched it further out, it would be another order of magnitude, because we'd have to go into fiber. So we pack it very close. And that means we end up packing a lot of stuff very closely together, which drives a lot of power into one rack, and it takes a lot of power to get the heat back out of it again.
So it would be worse if we stretched it further out. But it's partially a networking thing that's driving this, actually. So one of the levers we can try and pull is: is there a better way of doing this networking, to cluster these things tighter together? So it always comes back to the network, one way or the other.
Anne Currie: It does indeed always come back to the network. So although I live in a networking household, I'm not so familiar with this, I don't know how this works. Is it that the GPUs have to talk together very fast, so there's almost no transit time elapsed in messages between the machines?
Is that why the networking is so important?
Christopher Liljenstolpe: You wanna get as many GPUs talking as closely together as possible. More specifically, GPUs and their high bandwidth memory. So the HBM stacks, the high bandwidth memory stacks, and the GPUs. And one good question is whether this is a good architecture or not.
Basically, in an AI infrastructure, there are three networks that tie the infrastructure together. There's what's called the scale-up network, which is the very high speed network that stitches some number of GPUs together, and that's on the order of, today, anywhere from 3.6 terabits per second, upwards to what's coming down the road,
about 10 terabits a second of what's called non-blocking traffic between the GPUs in a scale-up cluster. And that could be anywhere from eight GPUs up to, now, within the next year or two, 500 and some odd GPUs in that cluster. So in that realm, you could have up to 500 GPUs all talking to each other at 10 terabits a second, or eight terabits a second, depending on the GPU manufacturer, et cetera.
And that's the highest performing part of the network. Then those clusters are talking to other GPUs in other clusters at usually around 800 gigabits a second. So that's a huge step down in performance. And then all those GPUs are talking to the outside world at the servers, since those things are in the server.
Usually those are packaged as eight GPUs in a server, and those servers drive to the outside world at 800 gigabits a second per server. And that's how they get their data. That's how they get their requests and how they give their answers. So, 800 gigabits a second.
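To make the step-down concrete, here's a minimal sketch of the three tiers just described, using the ballpark figures from the conversation (the numbers are illustrative and vary by GPU generation and vendor):

```python
# The three networks tying an AI cluster together, with ballpark
# bandwidths as described above (illustrative, not vendor specs).
tiers_gbps = {
    "scale-up (GPU to GPU, within a cluster)": 10_000,  # ~10 Tbps per GPU
    "scale-out (cluster to cluster)": 800,              # ~800 Gbps
    "front-end (server to outside world)": 800,         # ~800 Gbps per server
}

# Falling out of the scale-up domain is a big step down:
ratio = (tiers_gbps["scale-up (GPU to GPU, within a cluster)"]
         // tiers_gbps["scale-out (cluster to cluster)"])
print(f"Leaving the scale-up domain costs roughly a {ratio}x drop in bandwidth")
```

That order-of-magnitude drop is what makes the congestion problem discussed next so painful.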
Anne Currie: I'm gonna stop now and ask a stupid question, or, say, a very simple question. So stepping back: networks. I'm not a network expert, so I might say something totally stupid here. So, with networks, there are at least two very important things.
One is the bandwidth: how much data can you get down the pipes from one place to another? And the other is latency: how long does it take to do it? So I think what you are saying there, if I understand it correctly, is AI really needs high bandwidth.
And that's what's driving it. It's not latency, it's bandwidth.
Christopher Liljenstolpe: Yeah, you are correct. And people get that wrong. Because there's such high bandwidth, the latency doesn't matter as much, head-end latency, because the amount of data being moved is big and the bandwidth is high. There is a little bit of a latency hit, but high performance computing is more latency sensitive.
If you've got a very high bandwidth network, the data packets are actually pretty small, so the latency isn't as big a hit. The third thing is congestion. Congestion kills an AI network. And this is the problem. So if I can take the whole model that I'm computing against and put it in that scale-up domain,
then everything can talk to everything at full bandwidth and there's no congestion. But remember, those GPUs that are in the high bandwidth domain, there's eight today, or maybe 72 or 36 or 256, or maybe 500 and some odd if Jensen's build is correct and some of the other things we're working on with some other vendors might be correct.
So that's a lot of bandwidth. If you can't fit it all in that one domain, then they have to go over that slower link, 800 gig per GPU versus 10 terabits per GPU, to talk to a GPU in another one of those high bandwidth clusters. And all of a sudden you go from 10 terabits, or eight terabits, or three terabits even, to 800 gigabits.
So that's all of a sudden a much more contended or congested network. So you go from running down a motorway at two o'clock in the morning to a B-road, you know, a side road, with lots of people on it. And the GPUs do this.
Anne Currie: Oh yeah.
Christopher Liljenstolpe: And everything slows to a crawl. And all the GPUs go basically idle.
And that's what people don't want. 'Cause those GPUs are very expensive. Those GPU servers are hundreds of thousands of dollars each. They use a lot of power, and they're just idling, waiting for the GPU on the other side of that slow link to get back with an answer. So you don't want the model that you're inferring against, or your training, to be split across these things.
So you want everything on that high speed link. And if you want everything on that very high speed link, that multiple terabits per second per GPU, think about this: that means if I've got eight GPUs in a server, I've got 80 terabits of bandwidth coming into that server. And if I've got, let's say, 10 servers in that cluster, that means I've got 80 terabits of bandwidth between that server and every other server in that cluster.
And you do the math, that's about 10,000 cables running up and down inside that rack. So the cabling becomes interesting. There's all sorts of interesting problems here. So this is why I wanna get everything crammed in as tightly as possible, so I can get as many things into that rack; it's an easier problem.
And the power to put that on copper that runs maybe one meter in length, or a meter and a half, is less than a watt per cable, per what's called a SerDes. Put it on fiber, I'm over a watt at least, maybe over a couple of watts. So I go from a tenth of a watt to a couple of watts, and it takes more space on the board and everything else, so we get into physics problems.
That's why I need to pack it in tight. That's why I need more power in a higher density space, 'cause I wanna get everything into that one high bandwidth domain. Now, another approach might be to do away with this concept of scale-out and scale-up, and there are some architectures that might do that.
But the main model today, the NVIDIA model, is that scale-up and scale-out are kept separate. One can argue whether that's a good model. It is the model in the industry today.
That means the software developers have to be cognizant of that as well. And the people who design the schedulers have to be cognizant of that as well.
And so this is a design that now ripples through the entire architecture all the way up through the software stack and everything else.
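The back-of-envelope cable and power arithmetic above can be sketched as follows. The lane rate and per-lane wattages here are my own illustrative assumptions (a ~112 Gbps SerDes lane, roughly a tenth of a watt for short copper versus a couple of watts for optics), not figures quoted in the episode:

```python
# Rough cable-count and power sketch for a scale-up domain,
# following the numbers discussed above. All figures illustrative.
GPUS_PER_SERVER = 8
GBPS_PER_GPU = 10_000        # ~10 Tbps of scale-up bandwidth per GPU
SERVERS = 10                 # servers in the scale-up cluster
GBPS_PER_LANE = 112          # assumed per-SerDes-lane rate (copper)
WATTS_COPPER, WATTS_FIBER = 0.1, 2.0  # assumed per-lane power

server_gbps = GPUS_PER_SERVER * GBPS_PER_GPU       # 80 Tbps per server
lanes = SERVERS * server_gbps // GBPS_PER_LANE     # cables in the rack
print(f"~{lanes:,} cables")                        # same order as the ~10,000 quoted
print(f"copper: ~{lanes * WATTS_COPPER / 1000:.1f} kW, "
      f"fiber: ~{lanes * WATTS_FIBER / 1000:.1f} kW")
```

Even with rough inputs, the order-of-magnitude gap between copper and optics shows why packing everything into one rack is the lower-power option.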
Anne Currie: So what you're saying is that when we talk about AI and GPUs and all that kind of stuff, and the incredible amount of power it requires, we tend not to think about the fact that it's actually the networking that requires one hell of a lot of that power. And this is not networking going across the country.
It's not networking outside the data centers. This is networking inside them.
Christopher Liljenstolpe: This is networking within the rack. This is a one meter, two meter diameter network, and it's tens of thousands of cables.
Anne Currie: So I'm sure something you've been thinking about a lot recently is the enormous shift that's taken place with DeepSeek coming in. How much of an effect does that have on the network side of things?
Christopher Liljenstolpe: So the whole idea behind DeepSeek is you don't need to do as much, from a training perspective. I think of it as the data coming sort of pre-trained. So you don't need to do as much pre-training, you don't need to do as much training, therefore you don't need as many GPUs to sort of prep your data, prep your model.
So that means you don't need as big a scale-up cluster to train, to get ready to infer. And remember, training doesn't make you any money. If you're in this to make money, training doesn't make you any money. It's inference. Using the model is what makes the money.
And potentially inference might be impacted as well. But Jensen made an interesting point: as we start doing reasoned inference, that's gonna require a lot more compute. Now inference starts looking more like training. Up until recently, inference was always one and done.
You make one pass through inference and you get the result. That's why we used to get some interesting, let's just call them interesting, results. We used to call them, you know, hallucinations. But now you make one pass through, and then you sort of check it. Does it make sense? Do you reason?
Does it look reasonable? And you make another pass through, and another pass through, and another pass through: this is reasoned inference. That all of a sudden starts using a lot more compute. It looks a little bit more like a training job, almost. And that starts using a lot more GPUs, and you need more scale-up bandwidth between GPUs.
So it'll be interesting to see if DeepSeek benefits that reasoned inference as well; it should. The bigger question is, DeepSeek is probably only as good as the pre-trained data it ingests, right? So this sort of becomes: do we feed our AIs with other AI data? And at some point, do we all become self-referential, right?
Do we take AI data to feed other AI data? And pretty soon, you know, it's like if all the code in GitHub is written by AIs, and then we train coding models for GitHub using AI-written code. Is that a good thing or not a good thing?
Anne Currie: If it's tested code. I mean, if they also write tests and they run the tests and the code works, then, but...
Christopher Liljenstolpe: Yeah. Yeah. Of course, it's sort of like having the developer write their own tests, right?
Anne Currie: Yeah, that is true.
Christopher Liljenstolpe: You end up with a monoculture.
Anne Currie: Yeah.
Christopher Liljenstolpe: Or not. Or not, maybe you don't end up with a monoculture. I don't know. This is, now we're getting into philosophy.
Anne Currie: So it's interesting.
Christopher Liljenstolpe: And everyone just watched this go from infrastructure to software design to philosophy.
Anne Currie: You know, I do find the AI stuff quite fascinating. I do know somebody who's a DeepMind engineer and used to work at OpenAI, and I remember them telling me years and years ago about the massive change, you know, when AI was starting to get good. I was talking to them nearly 10 years ago.
I was like, suddenly it's got a lot better. Why has it got a lot better? And they said it's randomness. We realized that actually, if you injected a load more randomness into its decision making, it suddenly got vastly better. It was a sea change. So it's not as predictable.
And, you know, it is odd, something we don't talk about a lot, that AI is based, at its heart, on the injection of randomness, which I find fascinating.
Christopher Liljenstolpe: There was an interesting study: if you train AI on bad data in one domain, it will start giving you bad results in other domains as well.
Anne Currie: That's interesting.
Christopher Liljenstolpe: Which was really something. But anyway, yeah, now we're really off the rails.
Anne Currie: We are, and in fact we've only got 10 minutes left, so we should actually go back to sustainability. 'Cause the question I wanted to ask you: you mentioned earlier, when we were talking about racks, that you need a nuclear power station for every rack these days.
But is that literally the case? Can this only be done through nuclear, or can it be done like Texas is doing, making calls for large, flexible loads for all the mega amounts of solar they're running? Is it realistic? What do you think: is nuclear a prerequisite for AI?
Christopher Liljenstolpe: It is not a prerequisite, but it is probably gonna be a base load demand. And that's because, at least at this point, if you're gonna put up anything a hundred megawatts or more of AI compute, that is a serious amount of investment. And let's also be honest, if you're talking about a 500 megawatt or a gigawatt facility,
you're not pulling a substation permit, 'cause there aren't substations for things like that; you are going to jack yourself into a power plant. Because at that point, you know, a gigawatt is a power generation station, right? That is a reactor in a nuclear power station. That is a gas
turbine in a co-generation power plant, et cetera. It's a turbine in a major hydro plant, right? It is a full scale commercial power plant unit. So there's no reason to have a substation, because you are consuming a full commercial power plant. So you might as well plant it there. That's not small money. You are gonna have to guarantee a load to a power company to do that, one. Two, the amount you're gonna spend on the GPUs, let alone all the other infrastructure that goes around it, that is a huge capital investment. You are not gonna want that sitting idle for one minute in a year. So that is going to be a base load. Your shareholders are gonna string you up otherwise; it will always be running. So something's gonna have to support that base load. It could be solar, but then you're gonna have to have a very big battery plant. There's one going in, in India.
There's a one gigawatt facility going in for AI, and it's fully built out. It's gonna be held up by a solar plant. One third of the ground is going to be solar, and the remainder is gonna be battery, to hold the thing 24x7. So they will be doing solar, but it's going to be solar plus battery.
But yeah, you're gonna want this thing running all the time. So we joke about it being nuclear. The funny thing was, three years ago we were saying these small modular reactors, a hundred megawatts, that's a perfect size for a data hall. Now we're just saying, you know, unshutter your commercial nuclear reactors, because the gigawatt size commercial nuclear reactors are now about the right size. The interesting part of that is, what do you do when you have to refuel the reactor? Because most commercial reactors have to be shut down when you refuel. If you're jacked into a reactor, what do you do when they have to shut it down? That's a year-long process.
What do you do for power? 'Cause you're probably not connected to the grid. You're connected, like what they did in Pennsylvania, to the reactor. What do you do for power when they shut down that reactor? I hope the folks have thought about that. Maybe you still do small modulars.
Maybe you do 12 small modulars at a hundred megawatts each, and you sort of have an n+2. Interesting thoughts.
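That n+2 sizing works out neatly. Here's a minimal sketch, where `smr_units` is a hypothetical helper of my own, not anything from the episode:

```python
import math

# Size a fleet of small modular reactors so the facility's base load
# is covered even with two units offline (refueling plus a failure).
def smr_units(load_mw: float, unit_mw: float, spares: int = 2) -> int:
    return math.ceil(load_mw / unit_mw) + spares

# A 1 GW facility on 100 MW units needs 10 to carry the load, plus 2 spare.
print(smr_units(1000, 100))  # → 12
```

The same function generalizes to any load and unit size; the point is that redundancy, not raw capacity, is what makes many small units attractive over one big reactor.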
Anne Currie: Well, that is a very interesting thought. So you're making two fascinating points there that I have never heard made. One is that we've totally run ahead of SMRs, all that stuff we were talking about; we've galloped ahead of that. And yet it might actually be worth bringing them back, just because of that kind of modern resilience thing: it's better to have 10 small ones than one big one.
Christopher Liljenstolpe: Yeah, I've got resilient reactors, and if it's molten salt, you can refuel them by just topping off the salt tanks as you go. And you can remove the poison out of 'em as you go. So, you know, just back the salt truck up and dump more salt in. It's a little more than that, but yeah, sort of.
Anne Currie: Yeah.
Christopher Liljenstolpe: If you're interested in bashing your head into the wall and learning about things you never thought you'd have to learn about, this is a fun time to get into data center infrastructure, because you get to do things like, okay, how do I cram a couple hundred terabits per second into a network in a rack? And at the same time,
talk about liquid molten salt reactors. It's a broad spectrum. And, oh, let's also talk about signal integrity in dielectric fluids, 'cause we might have to send all this stuff swimming in a tank. You have a lot of interesting conversations in one day.
Anne Currie: It sounds like you're in a pretty fun area at the moment. And we thought cloud networking was fun five years ago. That was nothing, as it turns out.
Christopher Liljenstolpe: Yeah. And one thing that's sort of interesting now is we took the Sustainable and Scalable Infrastructure Alliance in the Linux Foundation and merged it, as I'm sure you've heard, with the Green Software Foundation,
because we thought it was probably time to get the hardware guys and the software guys, and gals, talking together. We realized that we really needed the stack to not have this wall between the hardware and the software.
The same things I alluded to before: the hardware impacts of the horror show that we've got going on, and I say that in the nicest possible way to my friends doing the chips, the unique challenges that we have coming. We really need better understanding on the scheduler side, et cetera, and how we manage that and monitor that, and the impacts of that on the software side.
So we decided to take the folks who are working on open hardware designs and making those sustainable, and marry that to the software side: the green software folks who are working on how we manage and monitor that as well. So we decided to take those two and put them together. And the first project out of that is gonna be something called Project Mycelium, which is going to look at how we build software linkages: how you manage and monitor the hardware infrastructure from the software side.
Anne Currie: Named after the networks of fungus under the forest floor, the way that everything in a forest is more connected together than we'd ever realized previously, using these incredible mycelium connections, I take it. I'm guessing that's why it's named that way.
Christopher Liljenstolpe: Exactly. Exactly. And a good friend of mine, who used to be the field CTO at Equinix, is gonna be running that project for me there.
Anne Currie: So, yeah, utterly fascinating stuff. Stepping back from all of this, it's a mind-blowing amount of complex new thoughts and approaches to things. And what's your view? You tend to have a kind of 30,000, 40,000 foot view on all of these things.
What are you thinking? Where's it all going? What's gonna happen?
Christopher Liljenstolpe: Well, one of my jokes is: yes, AI will kill us all. The question is, will it get smart enough to realize we're the problem and actively kill us, or will it just take so many resources that it melts all the ice caps and creates a water world before it becomes sentient, and kills us that way?
There's truth in every joke. I think right now the path that we're on, frankly, is not sustainable. If we follow that train of 150, 250, 600, the next logical step is north of a megawatt a rack. That path is unsustainable both from resources and power, but also economics.
We just can't do that. At the going rate, the US grid's gonna be capped by 2031. We will be out of power in the United States by 2031. Europe will be out first. So yeah, something has to give. We have to become more efficient with the way we utilize these resources, the algorithms we build. We're still brute forcing AI.
We think this is all brilliant software. It's not. We're still brute forcing the heck out of this stuff.
So something's gotta give there. I think when that does, there'll be a lot of business models that might face some challenges, because there's a lot of value built on the assumption that this is going to continue going this way.
But it needs to happen. And there's a lot of fluff as well; there's a lot of the equivalent of pets.com out there right now. I think we'll end up with a lot more distributed use cases for AI that don't need the same amount of power, that don't need huge inference across it.
But yeah, the current trend will have to get adjusted, and somebody's gonna figure it out.
Anne Currie: The old phrase...
Christopher Liljenstolpe: People will figure it out.
Anne Currie: If something can't go on, it won't. It'll stop, you know?
Christopher Liljenstolpe: There will be enough economic pressure that it will drive innovation that will fix it. I mean, just looking at it,
Anne Currie: Yeah, it's the code.
Christopher Liljenstolpe: I'm not sure how we'll mine enough copper to build the power transmission infrastructure to support this. So anyway, that's my doom and gloom part of this.
But I think what we will end up with, by the time we're done with it, is a very efficient computational infrastructure, because it's forcing us to look at everything along the stack. Air is an absolutely horrible heat transfer fluid. Everyone's running madly down the road of liquid.
Everyone's running madly down the road of higher voltage. Again, the way we transmit power in a data center today is pretty horrible. Everyone's wringing all the efficiencies they can out of that, because now we have to; it's just economically impossible to do it any other way. So whatever comes out of the back of this, we are gonna have a very efficient data center infrastructure.
Which is all for the better. This will probably fix the grids, because it has to, because we're driving a very different power transmission infrastructure. So we'll fix a bunch of problems along the way. Silver lining.
Anne Currie: And there is a lot of money behind it. So it's actually aligned with a lot of good things that we want, and it's driving a lot of money in those directions. It's interesting. If it doesn't kill us all, which, you know...
Christopher Liljenstolpe: Yeah, and who knows? It'll probably bring back nuclear. We'll probably be able to have rational conversations about other non-carbon-emitting power sources.
Anne Currie: Space-based solar power. Well, I'm desperate for it.
Christopher Liljenstolpe: Maybe, yeah, maybe. Might get some countries that just recently shuttered all their nuclear plants to go back and put their cooling towers back up.
Not talking about any European countries.
Anne Currie: Well, I'm sure everybody's brain is completely full now, and we've had a really interesting discussion that I have utterly enjoyed. So I think we should probably draw the podcast to an end with any final comments that anybody wants to make. Everything we talked about that we can put in the show notes will be in the show notes at the bottom of the episode.
Do you have any final points that you want to make?
Christopher Liljenstolpe: I mean, it is fun times. And it's not all doom and gloom, but right now there is a bit of hype.
At this point it seems like it is a train that's gonna keep on going, and it will correct. But it is leading to a lot of innovation, and that innovation will hang around. Just like when the dot-com bust happened, we will see a correction here. What people originally thought the internet was going to do, what was gonna be delivered by the internet, didn't really happen. But the things that it is used for now, even the people who originally created the ARPANET, or the people who invested in the original dot-com explosion of the late nineties,
they did not foresee, with the money they put into it, what it is being used for now. But the world has been forever changed by that investment, for good and ill both, and it's gonna be the same thing here. What we're investing in building now, we think we know what it's gonna be used for,
and we're wrong.
Of everything we think it's gonna be used for, maybe 5% will still be what it's being used for 15 years from now. The rest of it, we have no idea.
And we'll benefit from it and we'll suffer for it. But we're building a base infrastructure, and other people will actually build on that base infrastructure and deliver things that we have no idea about.
Anne Currie: Yeah, that reminds me of a discussion that we had a few years back about why the internet survived 2020, the beginning of the pandemic, which kept the West going. Because otherwise, if we hadn't been able to all stay at home and work over video conferencing and things like that,
a lot of the infrastructure that we relied on was put in place to support high-definition streaming TV. People put it in so folks could watch Game of Thrones, so in a sense Game of Thrones saved the West. Who would've predicted that? You just don't know what's gonna happen.
Christopher Liljenstolpe: Exactly. Yep. Indeed. And that infrastructure, which we didn't talk about, was actually put in place because service providers made a horrible choice early on of putting in broadband that was the cheap choice, that couldn't do multicast. If they had put in multicast-capable infrastructure, they wouldn't have put in the amount of backbone infrastructure that they did, because they would've had multicast and wouldn't have had to do the build that they did, which indeed actually helped us. So not having multicast out there probably saved our bacon. And it pains me to no end, because I was sitting there banging away in the mid-nineties saying, "we need to get multicast out there. It's so much more efficient. It will save so much money." And if we had, we probably would've been in much worse shape when the pandemic hit.
Anne Currie: It is interesting, that flabbiness. Inefficient code is what we've been building for the past 20 years, most of my career. We've been building highly inefficient code, but it does mean there's a lot of untapped potential in there to improve.
Christopher Liljenstolpe: True.
Anne Currie: Unrealized potential as a result of lazy behavior in the past. We are mining our own past laziness that might save us all.
Christopher Liljenstolpe: Indeed.
Anne Currie: On that note, our laziness and lack of foresight in the past have tended to save us in the future, and they might well save us again. On that happy, or at least nuanced, note,
thank you very much for listening and thank you very much for being my excellent guest today, Chris.
Christopher Liljenstolpe: Thank you for having me on, Anne, and thank you everyone for listening. I hope it was, if not educational, at least entertaining.
Anne Currie: I'm sure it was both. Thank you very much, and speak to you the next time I'm hosting the Environment Variables podcast. Goodbye.
Christopher Liljenstolpe: Bye everyone.
Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners.
Chris Adams: To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode.