Host Chris Adams and guest Romain Jacob delve into the often-overlooked energy demands of networking infrastructure to discover A Greener Internet that Sleeps More. While AI and data centers usually dominate the conversation, networking still consumes significant power, comparable to the energy usage of entire countries. They discuss innovative practices to make the internet greener, such as putting networks to sleep during low usage periods and extending the life of hardware. Romain talks about his recent Hypnos paper, which won Best Paper at HotCarbon 2024. He shares his team’s award-winning research on how the energy demand of the networking kit powering the internet can be reduced simply by powering down links when not in use.
Learn more about our people:
Find out more about the GSF:
Resources:
Other source material:
If you enjoyed this episode then please either:
TRANSCRIPT BELOW:
Romain Jacob: We used to consider that energy is cheap. Energy is there. We don't need to worry too much about it. So it's just simpler to plug the thing in, assume energy is there. You can draw power as much as you want, whenever you want, for as much as you want. And it's time to get away from that.
Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software.
I'm your host, Chris Adams.
Hello, and welcome to another edition of Environment Variables, where we bring you the latest news and updates from the world of sustainable software development. I'm your host, Chris Adams. Back in episode 10 of this podcast in September 2022, we did a deep dive into the subject of green networking, because while a lot of the time people talk about the energy demands of AI and data centers, in 2024, in absolute terms, the amount of power consumed by networking was still larger.
Back then in 2022, the best figures, when we looked at this, came from the IEA, which put the energy usage of data networks at around 250 terawatt hours per year. So that's about the same as all of Spain's energy usage in 2023, so that's not nothing. Now, it's a few years later, 2024, and the best figures from the same agency, the IEA, now give us a range of between 260 and 360 terawatt hours, which could be a jump of up to 50 percent in three years.
Now, because much of this power is coming from fossil fuels, this is a real problem, climate wise. So what can we do about this? With me to explore this once again is my friend Romain Jacob, who helped guide us through the subject in 2022, along with Dr. Yves Schouler, at Intel at the time.
His team's recent research won the Best Paper Award at HotCarbon, the conference that has fast become a fixture on the green IT and digital sustainability circuit. So he seemed a good person to ask about this. Romain, thank you so much for joining me for this podcast. Can I give you the floor to introduce yourself before we revisit the world of green networks?
Romain Jacob: Chris, welcome. I'm very happy to be back on the podcast to talk a little bit more about this. Hello, I'm Romain, I'm a postdoctoral researcher at ETH Zurich in Switzerland. I've been working in sustainability for two to three years now, more or less full time, as much as full time research happens in academia. And yeah, I was, I had the pleasure to present some of our technical work at HotCarbon and I'm sure we're going to deep dive into a bit more in the podcast.
Chris Adams: Okay. Thank you, Romain. And for people who are new to this podcast, my name is Chris Adams. I am the executive director of the Green Web Foundation. We're a Dutch nonprofit focused on reaching a fossil free internet by 2030. I also work as one of the policy chairs inside the larger Green Software Foundation, which is why I'm on this podcast.
Alright, if you're also new to this podcast, every single project and paper that we mention today we'll be posting a link to in the show notes. So for people who are on a quest to learn more about reducing the environmental impact of software engineering, you can use these for your own practices and your own research.
Okay, Romain, are you sitting comfortably?
Romain Jacob: I am.
Chris Adams: Okay, great. Then I'll begin. Okay, Romain, when we last spoke, we covered a range of approaches that people are using right now to rein in the environmental impact of networking. But before we spend too much time, I wanted to see if you could help set the scene, to help folks develop a mental model for thinking about, say, networking versus data centers, because I touched on this a little bit, but it might not be obvious to most people.
So maybe if you could just provide the high level, then we can touch on some of these differences between the two, and why you might care, or how you might think about these differently. So yeah, let's go from there.
Romain Jacob: Yes. Sounds good. It's true, you mentioned in your introduction that we sometimes get numbers that oppose or compare the carbon footprint or usage of data centers and the ones from networks. But now, what does network really mean? It's not really clear what we mean by that. Because there is networking in data centers and, you know, the rest of the networks are not completely detached from it.
But at a high level, you have a set of networks that are meant to provide internet access to individuals and to other networks. What this means is that you have companies that are specialized to just make your laptop, your phone, or other appliances you have, being able to talk through the internet. And typically when we refer to networks, without further details, this is the type of network you're talking about.
And data centers, on the other hand, are something that is in the scale of IT fairly recent, where we have this idea of if we centralize in, in one physical location, a lot of powerful resource machines that have a lot of compute, that have a lot of storage available, then we can use that as a remote computer.
And just offload tasks to those data centers and just only get the results back. In today's ecosystem, data centers are a very core element of the internet. The internet today would not really work without data centers. Or at least a lot of the applications we use over the internet only work thanks to data centers.
So from a networking perspective, in a data center, you also have, you need to exchange information and bits and packets between those different machines that live inside the data center, but the way the network looks like is very different from the cellular network that is providing mobile connectivity to your smartphone.
Chris Adams: Ah, I see.
Romain Jacob: So those are very different types of networks, and they have very different ways they are designed, ways they are operated, and how they are used. Typically, in a data center, you tend to have quite high usage of the data center network, because you have a lot of exchange and interaction between the different machines that live inside the data center. Whereas the networks that provide internet access, so the networks that are managed by entities we call ISPs, for Internet Service Providers,
Chris Adams: Mm hmm.
Romain Jacob: Tends to be much less utilized. There is a lot more capacity. in those network that what is really demanded by the end user.
Chris Adams: Okay.
Romain Jacob: That's a fundamental difference between, between the two networks and something that we try to leverage in our research.
Chris Adams: Okay, so if I just check if I understand this, so you said that you might have networks which might be like, say the ISPs and things, they are individually not that high themselves, but because there's so many of them and because they're so diffuse, in aggregate, this can work out to be a very large figure, for example, and, and, that also speaks a little bit about, I guess, how you might power some of these.
So, like, when we think about a large hyperscale data center, that's something in the region of maybe, if you're looking at a large one, which is maybe the high tens to maybe low hundreds of megawatts, that's maybe thousands of homes. That's a lot of power in one place, whereas with a network, you don't have quite so much, but it's because it's distributed, you might have to have different approaches to managing that.
So, for example, you might and, you might have to take different strategies to either decarbonize that or deal with some of that, some of that load that you actually have. Okay, and I guess one of the questions I might have to ask you about this is then, when you have this, split between the maybe it's just worth talking a little bit about the different kinds of networks that you have here.
So for example, as I understand it, there's maybe an ISP I connect to, my ISP, but then they need to connect to some other cable. And if I'm connecting to a server across the Atlantic, then I'm going through some backbone or something like that. Maybe you could talk a little bit about the different layers there, and whether there are any differences in how those networks need to be powered, for example, or how they're used?
Romain Jacob: Yeah, totally. So the name internet stands for an interconnection of networks. So the internet, by name and in principle, is a network of networks, right? So when you connect to the internet, what it means is that you connect to another machine somewhere on the planet that also has access to the same global network.
But in this global network, the connection between those two endpoints has to go through, most of the time, several different networks. And so, typically, if we look at the internet infrastructure, there are several ways of representing this, but one division that we usually use is: you have the core ISPs, so the ones that sit more in the middle, and they provide transit for many, many different interconnections.
Then you have networks that are qualified as belonging more to the metro area. So this is where it's getting closer to the user, but it's not yet the network that provides direct connectivity to, let's say, your phone or your laptop. And then you have the edge network. And the edge network is really there to provide what is called the last mile connectivity to the end user.
And those categories exist and were proposed because each share of the network has different characteristics. The core tends to look a bit more like the data center. Like, it's a more densely meshed network, so there are more interconnections between the different points in that network. And the utilization tends to be higher and kind of constant, because it's a global network.
Whereas, the closer you go to the end user, the more you're going to see fluctuation, because, for example, users are awake during certain hours of the day, and this is when they tend to use their machines. You will see peaks of usage during, you know, TV show primetimes in the evening, but much less at 5 a.m., when most people are asleep and not using their phone or their laptops. And so, those networks look different in terms of what they are used for and how they are built and designed, because we try to adapt the design to the particular use case.
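The diurnal pattern Romain describes is the kind of signal you could threshold to declare "night time". A minimal sketch of that idea in Python; the hourly utilization values and the 5% threshold below are invented for illustration, not numbers from the episode or the paper:

```python
# Minimal sketch: find the low-usage window in a daily utilization
# trace. The hourly values and the threshold are made up.

hourly_util = [0.02, 0.01, 0.01, 0.01, 0.02, 0.04,   # 00:00-05:00
               0.10, 0.20, 0.30, 0.35, 0.33, 0.30,   # 06:00-11:00
               0.28, 0.30, 0.32, 0.35, 0.40, 0.50,   # 12:00-17:00
               0.60, 0.70, 0.65, 0.40, 0.15, 0.05]   # 18:00-23:00

NIGHT_THRESHOLD = 0.05  # declare "night" at or below 5% utilization

# Hours whose utilization is low enough that links could sleep
night_hours = [h for h, u in enumerate(hourly_util) if u <= NIGHT_THRESHOLD]
print("night hours:", night_hours)
```

In a real deployment the threshold would have to be learned from the network's own traffic history rather than picked by hand.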
Chris Adams: Okay, thank you for that. So it's a little bit, if you squint, it's a little bit like how you might have motorways and then A roads and B roads and maybe smaller roads, for example.
Romain Jacob: Very like that. It's very much like that.
Chris Adams: All right. Okay. So that's actually quite helpful. And when I think about other kind of systems, I think a little bit about say, like electricity networks, which have, you know, big fat transmission ones which send lots of things, but then you have like the smaller distribution networks which are, so it's somewhat comparable. Okay.
Romain Jacob: It's very similar in principle.
Chris Adams: Okay, so this is helpful for developing a mental model about some of this. Alright, okay, so last episode we spoke about the different techniques people use, things like carbon-aware networking, and different protocol designs like, I think, SCION, which was one of the projects proposed.
And you kind of coined this phrase that an internet of the future needs to grow old and sleep more. And I really, I found this kind of quite entertaining and it stuck with me. But for people who are new to this, maybe you could just unpack what you meant by that because not everyone has read the paper or seen the talk. And I think it's quite helpful for thinking about this subject in general, actually.
Romain Jacob: Yeah, sure. So, two years ago, in the first edition of the HotCarbon workshop, we outlined this vision of what could be relevant to work on in the sustainable networking area. And the two ideas that emerged were essentially captured by this growing old and sleeping more aspect. So what do we mean by that?
Growing old is essentially the idea that we tend not to use the hardware we buy for long enough. So if we take an end-user perspective, we tend to change phones every couple of years. Numbers are changing about this, but we can debate whether this is a good thing or a bad thing. In the networking area, so for the hardware that operators buy to make up the network, devices that we call routers and switches, it tends to be a bit of the same thing.
The standard used to be that devices were changed every three years. So in three years, the entire infrastructure would be renewed. So you would buy new hardware to get higher speed or better energy efficiency, and so on. And there are various reasons for doing that; we can detail them if you're interested afterwards. But it has a very significant cost, a financial cost, but also in terms of carbon.
Because one thing people need to understand is that every time you manufacture a product, not just in networking, but any product, there is a carbon footprint associated with it. This is what we typically refer to as the embodied carbon footprint. And so this embodied carbon is a one-time payment, but if you buy more often, well, you pay this price more often.
Now, there's a bit of a tricky thing, which is that you can argue that if I buy a new device, a new phone or a new router, that is 10 times more energy efficient, then over time I will save enough to compensate for the embodied cost. The problem is that it's very hard to estimate how much you would save and how much the embodied footprint is.
There is, somewhere, a break-even point where doing this upgrade, buying this new hardware, starts paying off from a carbon perspective, but it's not necessarily clear ahead of time when that happens. Generally speaking though, what was pretty clear to us is that we could, and we probably should, be using the hardware longer.
And this is what we meant by the grow old idea.
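The break-even reasoning above can be made concrete with a toy calculation. All the numbers in this sketch (embodied footprint, power draws, grid intensity) are made-up illustrative assumptions, not figures from the paper:

```python
# Toy break-even calculation for replacing a router with a more
# energy-efficient one. All numbers here are illustrative assumptions.

def breakeven_years(embodied_kg: float,
                    old_power_w: float,
                    new_power_w: float,
                    grid_intensity_kg_per_kwh: float) -> float:
    """Years until the operational carbon saved by the new device
    offsets its embodied (manufacturing) footprint."""
    saved_w = old_power_w - new_power_w
    if saved_w <= 0:
        return float("inf")  # the new device never pays off
    kwh_saved_per_year = saved_w * 24 * 365 / 1000
    kg_saved_per_year = kwh_saved_per_year * grid_intensity_kg_per_kwh
    return embodied_kg / kg_saved_per_year

# Example: 1000 kg embodied carbon, 500 W old vs 400 W new router,
# on a 0.3 kgCO2/kWh grid -> break-even after roughly 3.8 years.
years = breakeven_years(1000, 500, 400, 0.3)
print(f"Break-even after {years:.1f} years")
```

Note how sensitive the result is to the grid intensity: on a cleaner grid, the operational savings shrink and the break-even point moves further out, which strengthens the case for keeping hardware longer.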
Chris Adams: Ah, okay. Thanks for that.
Romain Jacob: Now, the sleep more part, this is kind of simpler. This is an idea that is very common in other fields, like what we know as embedded systems or the Internet of Things. Think about devices that run on battery; to make things simple, devices that are more or less small but run on battery. And because they need to run on battery, for decades, engineers have been trying to optimize the energy efficiency of those devices.
And the most efficient way of doing this is essentially turning off everything you don't need when you don't need it. So if you think about your phone, your, the screen of your phone is off, I don't know, maybe 90 percent of the time. And this is to save the power drawn by your screen, which is by far the most expensive or power hungry element in a smartphone.
And we do this in order to save on the average power, and so on energy at the end of the day. And we are arguing that in the networking world, this is not done much, and it should probably be done more. Now, I need to be quite precise here: when I talk about the networking world, I'm talking about the wired networking domain.
In the mobile domain, so in cellular communication that connects to your phone, or also in Wi-Fi and so on, the idea of sleeping is already used quite a lot. But in the wired domain, it did not transfer much. And the reason why it did not transfer is because we used to consider that energy is cheap. Energy is there. We don't need to worry too much about it. So, it's just simpler to plug the thing in, assume energy is there. You can draw power as much as you want, whenever you want, for as much as you want. And it's time to get away from that.
Chris Adams: Okay, alright, so if I just play that back to you to make sure I understand it correctly. So, the growing old part is essentially a reference to the embodied energy that goes into making various kinds of hardware. So, like, when we're looking at a laptop, around 80 percent of the carbon footprint, it comes from the manufacturing, compared to the running of this.
And, if I keep that laptop for a short period of time, that's a greater proportion of the lifetime emissions, for example. And we see the same thing in data centers as well. So, for example, Facebook and company, you know, some companies and hyperscalers, they might have had this three year period that you spoke about before, but in the 2010s, we saw figures, anecdotally, never published, where sometimes these would go down to as little as 18 months for some servers, because they wanted to get the maximum amount of compute for the power they were using, for example. So they had incentives to change like that. So that's what that part is a reference to.
And I think on the eImpact mailing list, where I've seen a lot of the discussion, I will share a link in the show notes to this really cool 3D chart showing where the break-even points are that you mentioned. And the sleeping part seems to be a reference to the fact that, in many ways, networks are often designed for the maximum amount of usage, not necessarily the average usage, similar to how, say, the electricity grid in America is currently designed for everyone to be using aircon at the same time, when normally it's maybe at 40 percent utilization.
So there's all this headroom, which mostly goes unused. And it's a bit like servers, you know; as software engineers, we're taught generally to size for the maximum load, because the loss of business is supposed to be worse than the cost of having that extra capacity.
But in 2024, there are new approaches that could be taken. And we do things like serverless and scaling things down. And these ideas are - they've been slower to be adopted in the networking field essentially, right?
Romain Jacob: Yeah, that's kind of the, that's kind of the idea.
Chris Adams: Brilliant! Okay, that is good. We'll share a link to the paper because it's quite a fun read and I really helped, it stuck with me ever since I saw you speaking about that.
Okay, so we've set some of the scene so far. We've got some nice mental models for thinking about this. We referred to the energy and the embodied part, and I guess the thing we didn't mention too much was that the growing old thing is going to be more of an issue over time, because while we're getting better at decarbonising the electricity of the internet, we're not doing such a good job of decarbonising the extremely energy intensive process of making electronics right now. So this is only going to become more acute over time.
So, maybe I can allow you to talk a little bit about this Hypnos paper, because as I understand it, it was an extension of some of this vision going forward. And I know that you weren't presenting it yourself, but it was your team who were presenting it at HotCarbon, so maybe you can talk a little bit about that, and maybe say who was presenting, because, yeah, I enjoyed reading it in a similar way; it was quite fun, basically.
Romain Jacob: So Hypnos is a recent proposal that we've made to essentially try to quantify this sleeping principle. So, two and a half years ago, we said, okay, we could look at the embodied aspect, we could look at the operational aspect, and how to improve them. And we thought back then that one way was to just try to apply those principles of sleeping to wired networks.
And so together with some students from ETH, we started looking into this and said, okay, in theory, we know how to do this. Let's try. You know, let's try for real: let's take some hardware, let's design a prototype protocol that would just put some things to sleep, and see what happens. What was surprising to us was that the theory of how you would do the sleeping in a wired network, and how much you would expect to save by doing that, was old.
The first papers go back to 2008 or so, so it's been a while. And back then, people were saying, okay, assuming we have hardware that allows us to do everything we want, then we could implement sleeping in this way, and then we would save so much. So they knew that the proposals they were making back then were not readily applicable.
And so we felt like, 15 years later, it's kind of interesting to see: where are we today? Like, how can we do things? And the key element there was how quickly you can turn something back on. You can always turn something off; it takes some time, and you save some power, okay? But then, eventually, if you need it back on, you want it to react quickly.
You know, I talked about the screen of your phone before. It's always off, and it's fine, because as soon as you press the button and touch it with your finger, the screen lights up, right? It feels instant. So it needs to happen quickly to be usable. Except that in networks, it's not like that, I mean, at least not today.
So if you think about a link that connects two routers, this was the first thing that we started considering. Okay, let's put that to sleep. It's essentially the smallest unit in a network that you could put to sleep.
Right.
Chris Adams: A bit like a lane in a multi-lane road.
Romain Jacob: Yeah, I would think about the road network. Like, turning off a port or turning off a link in a network would be like cutting one road in your road network. You know, like here in the city, you have many different ways to get to different end points, and you would just say, okay, this street is closed, so you can't use it.
Chris Adams: Ah, I see. Okay.
Romain Jacob: That's kind of like the simplest thing one can do from a networking perspective, except that to turn the thing back on, to reopen the street, would take multiple seconds.
Chris Adams: Mm.
Romain Jacob: And it doesn't sound like much, but in the networking area, multiple seconds is a lot of time because a lot of traffic can be sent during this time.
And, to make things short and not too technical, it's way too long.
Chris Adams: So you want milliseconds, which are like thousandths of a second, and if something takes two or three seconds, it's two or three thousand times slower than you'd like it to be, basically.
Romain Jacob: Exactly. So, without getting too nerdy and too technical, the problem is, we can't do what was suggested in the literature, because we cannot sleep at the short timescales that were planned. And so we were like, okay, so is it over, or can we still do something? And what we were thinking is, maybe we cannot sleep at millisecond timescales, but we can still leverage the fact that some networks are a lot more used during the day than during the night.
So we have a lot of patterns that are daily or hourly that we can leverage to say, okay, well, we have a predictable variation in the average use of the network. And so when we reach the value where we declare night time, then maybe there are some things we can put to sleep. And so we tried to implement a protocol that does that. We said, okay, let's do the simplest thing possible and see how well it works. And Hypnos is essentially the outcome of that. So in essence, it's a very simple protocol that looks at all of the roads, so all of the links in the network, and how much they are used. And then we start turning off the unused ones.
Until we reach some kind of stopping condition where we say, okay, now it's enough; the rest we really need to keep. At a high level, this is what we do. So one challenge was to get actual data to test it.
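That high-level greedy idea, turn off the least-used links until a stopping condition, can be sketched roughly as follows. This is a simplification for illustration only, not the actual Hypnos implementation: the toy topology, the utilization numbers, and the "stay connected" stopping condition are all invented here:

```python
# Rough sketch of a greedy "sleep the least-used links" loop.
# Illustrative only; not the actual Hypnos protocol.
from collections import defaultdict

def connected(nodes, links):
    """Check via graph traversal that the remaining links still
    connect every node in the network."""
    graph = defaultdict(set)
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(graph[n] - seen)
    return seen == set(nodes)

def sleep_links(nodes, utilization):
    """Greedily turn off links from least to most used, keeping
    the network connected (an invented stopping condition)."""
    active = set(utilization)
    asleep = []
    for link in sorted(utilization, key=utilization.get):
        candidate = active - {link}
        if not connected(nodes, candidate):
            continue  # this link is needed for connectivity; keep it on
        active = candidate
        asleep.append(link)
    return asleep, active

# A toy ring network A-B-C-D with one extra chord B-D
nodes = {"A", "B", "C", "D"}
util = {("A", "B"): 0.5, ("B", "C"): 0.02, ("C", "D"): 0.4,
        ("D", "A"): 0.3, ("B", "D"): 0.01}
asleep, active = sleep_links(nodes, util)
print("sleeping:", asleep)
```

A real operator would use a richer stopping condition than bare connectivity, for instance keeping enough spare capacity to absorb a failure, which is exactly the resiliency concern discussed later in the episode.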
Chris Adams: Mm-Hmm.
Romain Jacob: Because if you simulate a network and you simulate the utilization of your network, you can make things look as pretty or as ugly as you want, you know, depending on how you look at things.
And so, what was really missing from the literature was a precise case study that says, okay, here is the data from a given ISP; here's what the network looks like, and here's what the utilization looks like; in this network, what can we do? So, there has been a very long effort to actually get this data.
And then, the Hypnos paper essentially says, okay, we have the protocol, we have the theory, now we have the data. Let's match the two things together and see where that takes us. And we looked at two internet service providers that belong to the access part. So those are networks that are very close to the end user, where you would expect more of the day and night fluctuation.
And we do see that. What we were a bit surprised to confirm is that those networks are effectively underutilized. Do you want to dare a guess at the average utilization in those networks?
Chris Adams: I literally couldn't, I have no idea what the number might actually be to be honest.
Romain Jacob: Guess!
Chris Adams: Okay is it like Okay, so I said the national grid was about 40%. Is it like 40, 50%? Like, that's, like, not-
Romain Jacob: Four, four zero?
Chris Adams: Yeah, four zero is like what I, is what national, electricity grid is. So maybe it's like, something like that, maybe?
That's my guess.
Romain Jacob: No, you're an order of magnitude too high. We are talking a couple of percent.
Chris Adams: Oh, wow! Okay, and the whole point about the internet is that if you don't have one route, you can still route other ways. So you've got all these links, which almost no one is ever using, basically, at like 2%. Okay, alright.
Romain Jacob: I need to qualify this, right? Okay. This is for the couple of networks that we managed to get access to the data for, right? So I'm not claiming this is the general number. I would love to know; if you have data, please let us know.
Chris Adams: Mm
Romain Jacob: But for the networks we could get access to, this is the type of numbers you would see. An average utilization of a couple of percent. And again, going back to what we were saying before, in a data center, things would be different.
Chris Adams: Mm hmm.
Romain Jacob: I actually don't know, because I've worked very little in data center networking, but I would expect things to be more in the 40 to 50 percent range, kind of like what you were mentioning before.
But in an internet service provider network, the underutilization is extreme. There are various reasons for that, but it tends to be the case.
Chris Adams: That's really interesting, because when you look at data centers, so, like, I can tell you about the service that I run, or that our organization runs, the Green Web Foundation. We run a checking service that gets around between 5 and 10 million checks every day, right? So that's maybe in the order of 400 million per month, something like that.
It's a relatively high number, for example, and even with that, we're at around 50 percent. We started working out the environmental impact of our own systems recently, and that's with utilization around 50 percent for our systems. And in the cloud, typically you'll see cloud providers saying, oh, we're really good, we're at 30 or 40 percent. The highest I've seen is Facebook's most recent stuff about XFaaS, where they say they can achieve utilization as high as 60-odd percent. But lots of data centers, the kind of older data centers, would have been in the low single digits.
And you've had this whole wave of people saying, well, let's move to the cloud, making much better use of a smaller number of machines. So it sounds like the same kind of story of massive underutilization, and therefore huge amounts of essentially idle hardware. It seems like it's somewhat similar in the networking field as well.
And there's maybe scope for reductions in that field as well. Okay.
Romain Jacob: Exactly. And so, I want to make it clear, like, it's not happening this way because operators are idiots, right? It's just, there are a number of reasons why you have such underutilization. One, what I would say is probably the main one from a decision point of view, is that you want to provide high performance for a number of connections in your network.
So to reach from point A to point B, you want to make sure that you have the lowest delay, typically. And that requires having a direct line. Then you have other concerns, like the fact that things do fail in networks. Link failures happen, and they can be quite drastic. You operate a physical infrastructure in a country where people live and work; you have incidents, fibers get cut, and those are things that take a long time to fix, and so on.
So you want to have some resiliency in your network, so that if some part of the network goes down, you can still reroute the traffic the other way around and still have enough capacity to serve that traffic. So you have some links that are intentionally not used by default. That's how you get to a 2 percent, or a couple of percent, average.
But you don't want to be at 50 percent, because if you are at 50 percent and something really goes down, then you may run into a situation where you don't have enough capacity left to run your business. So the point is, we have such a high underutilization, and, something that I don't think we've explained so far, in network equipment, so routers and switches, you have very little power proportionality.
What I mean by that is the amount of power that is drawn by a router is essentially constant. It's not exactly true, but at a high level.
Chris Adams: Hmm.
Romain Jacob: The amount of power drawn is almost independent or varies very little if you send no traffic at all or if you send at 100%.
Chris Adams: Okay.
Romain Jacob: So, what it means is that if you have a router that you use at 1 percent of its total capacity, you pay almost 100 percent of the power.
Chris Adams: It's almost like one person in that plane going back and forwards, for example. Like, if I'm going to fly, there's, you know, if I'm going to fly somewhere and I'm the only person, it's going to be the same footprint as if that plane was entirely full, for example.
Romain Jacob: Kind of, yes. It's the same kind of idea. And this is why, for us, investigating this sleeping was kind of interesting, because we know we have such massive underutilization, although I probably would not have guessed it was that low, and it wastes a lot. We are essentially operating most of those links at the worst efficiency point possible. And so we try to remedy this, and this goes one step in that direction.
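To make the lack of power proportionality concrete, here is a minimal sketch in Python. All the numbers (idle and full-load wattage) are illustrative assumptions, not measurements from the episode:

```python
# Illustrative (made-up) numbers for a router's power draw; real figures
# vary widely by device, as discussed later in the episode.
P_IDLE = 400.0  # watts drawn with zero traffic (assumption)
P_MAX = 440.0   # watts drawn at 100% utilization (assumption)

def router_power_watts(utilization: float) -> float:
    """Linear power model: almost all power is paid regardless of load."""
    return P_IDLE + utilization * (P_MAX - P_IDLE)

# At 1% utilization you still pay roughly 91% of the full-load power,
# which is the "1 percent of capacity, almost 100 percent of the power"
# effect Romain describes.
share_of_max = router_power_watts(0.01) / router_power_watts(1.0)
print(f"{share_of_max:.0%}")
```

With these assumed numbers, running a link at 1 percent utilization still draws about 91 percent of the full-load power, which is why operating links at a few percent is the worst efficiency point.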
Chris Adams: I see. And I think one thing that you mentioned before was this idea that you can power these things down, because there are alternative routes through the network at any time. It may be the case that even if you do have these things powered off, in response to upticks in demand, just like with, say, national grids, where people might switch on batteries to feed power into the grid, or possibly a peaking gas plant,
you still have the option of switching these links back on when there is a big peak in demand, for example.
Romain Jacob: Exactly. And it's the same with the sleeping protocol we proposed, right? So, it still makes sense to have those redundant links deployed, you know, those fibers laid out. And then you may say, yeah, but we've paid all this effort to actually install this and it's there, why should I not turn it on? Well, because it consumes energy whether you use it or not, that's for one. And second, it's good to have it in case you need it, but you can turn it off so that you save energy, if you can turn it back on quickly. Right, then it goes back to what I was saying before: the turning on quickly part is still problematic today.
Chris Adams: Hmm.
Romain Jacob: So orders of magnitude that the time it takes to actually do this would be in 10, 10 seconds, roughly a few seconds, let's say, up to more, a minute or so.
So then you have to weigh the benefit you gain by turning links off, in terms of energy, against the time you may have to wait until you get back to a good state in your network in case you have some failures. And of course you need to multiply that risk by the likelihood of getting such link failures.
So, let's say you have a doomsday event that, you know, will just kill the network entirely, but that happens once every hundred years. Maybe you can be fine having a day-to-day management policy that says, okay, to manage this doomsday event, we will need an hour, but all the rest of the time it will be fine and we'll save energy every single day.
You know, you have weighed the pros and cons of a strategy in terms of performance and in terms of energy usage.
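The pros-and-cons weighing Romain describes can be sketched as a back-of-the-envelope expected-value calculation. Every figure below (link power, sleep hours, failure rate, wake-up delay) is a made-up assumption purely for illustration:

```python
# Energy saved every day by sleeping redundant links, versus a recovery
# delay that only matters when a failure actually occurs. All numbers
# are invented, not from the Hypnos paper.
link_power_w = 50.0            # power per sleeping link (assumption)
links_asleep = 20
hours_asleep_per_day = 8.0

daily_kwh_saved = link_power_w * links_asleep * hours_asleep_per_day / 1000.0

failure_prob_per_day = 1 / 365.0   # one relevant failure a year (assumption)
wakeup_delay_s = 30.0              # time to bring sleeping links back up

# Expected extra recovery delay per day, weighted by how rare failures are.
expected_delay_s = failure_prob_per_day * wakeup_delay_s

print(daily_kwh_saved, expected_delay_s)
```

Under these assumptions you bank kilowatt-hours every single day, while the expected recovery-delay cost, weighted by failure likelihood, stays a fraction of a second per day; the trade-off only flips if failures are frequent or the wake-up time is very long.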
Chris Adams: And presumably, one thing that you've mentioned is that you often have these regular, kind of, predictable cycles. Like, fewer people use the internet when they're asleep than when they're awake, for example. It sounds really silly, but, yeah, you're going to see these predictable patterns.
Some of the work with the Hypnos paper was essentially taking some of these things into account. So you can say, well, you need to have this buffer, but we don't have to have a massive buffer. You don't need to have every single car in the world with its engine on, idling, just in case you need to use it; you can turn off some of these car engines. So that was the kind of idea behind this.
Okay, neat. So I've used this car model a few times, but it suggests that when we actually think about energy usage and how it scales with how we use the internet, it might not be the correct mental model. And I just want to run this by you, because this is one thing that I've been thinking about recently: a lot of us tend to instinctively reach for a kind of car, driving, and burning fuel model, because that's how a lot of us experience the costs of energy a lot of the time, right?
But it feels almost like a different model might be, I don't know, like bike lanes, where there's an upfront cost to build something. You might need to make sure a bike lane is well lit, for example, but the number of people using the bike lane isn't the big driver of emissions. Maybe something like that.
Romain Jacob: Yeah, I was about to say, I think the analogy is not wrong per se, it's just a matter of the trade-off between the infrastructure cost and the driving cost, let's say. So let's assume you're using a means of transport, whatever that may be, that has a cost X per kilometer, but then you need light, and you need, I don't know, cooling, or if you're using something that works in, like, a superconductive environment, then you need extreme cooling, and so the cost for the environment gets very high. I think the superconducting thing is actually a much closer analogy to the way networks work.
And you need to spend a lot of power, or to draw a lot of power, just to get the infrastructure on. But once the infrastructure is on, once you get your superconductive environment, then traversing this environment is very cheap.
Chris Adams: Ah, okay.
Romain Jacob: And networks are a bit like that today, right? So turning the wires on costs a lot, but when it's on, sending the bits through the wire, it's pretty cheap.
Chris Adams: Ah, I see. And when I've spoken to other people who know more about networking than me, they've basically told me that at some levels, even when you're not sending any data, there is a signal being sent that basically says, I'm not sending data, I'm not sending data, I'm not sending data, just to make sure you've got that connection, so that when you do send some data, there's a fast response time.
So, just because we aren't perceiving something doesn't mean there isn't energy use taking place for example. So there's maybe some leakiness in the models that we might instinctively just use or intuitively try applying when we're trying to figure out, okay, how do I make something more sustainable for example?
Romain Jacob: Yeah, this is very true. And it gets a bit more detailed than that. It also depends on the type of physical layer you use for sending your information. In networks today, I guess we could differentiate between three main types of physical layer. One is electrical communication, so you send an electrical signal through a power rail.
You have optical communication, so essentially using light that you modulate in some way. And then there's everything that is kind of wireless and radio wave communication. I'll leave the wireless part out because I know less about it and it's a very complicated beast. But if you compare electrical to optical, things work kind of differently.
In the electrical environment, you essentially have a physical connection between the two points that are trying to talk to each other. And so, when the physical connection is there, you may send messages, as you were saying before, like, I'm not sending, I'm not sending, I'm not sending, but you can do this, for example, once every, I don't know, 30 seconds or so.
It will be enough to, yeah, keep the connection alive. Whereas if you use optical, it's different. Because if you use optical communication, the line does not exist; the line between the two ends exists only because you have a laser that is sending some photons from one end to the other. So, where it's different is that a laser is an active component.
You need to spend energy to create this link between the two ends.
Chris Adams: Mm.
Romain Jacob: And so, now, it's not that every 30 seconds you need to say, I'm not sending anything. It's that, all the time, you need to have this laser on so that the two endpoints know they are connected to each other.
Chris Adams: Ah, okay.
Romain Jacob: And it's actually one of the reasons why the early ideas about sleeping are not so much in use today: they don't work nicely with optical communication.
And optical communications are the de facto standard in networks today, in the core of the internet and in data centers as well. For reasons that we don't have time to detail, optical is the primary means of communication. And by design, the laser needs to be on for the communication to exist.
Whether you send data or not.
Chris Adams: Alright, okay, thank you for elucidating this part here. So it sounds like, with the models we might use a lot of the time, when you're working with digital sustainability, it's very common to look at a kind of figure per gigabyte sent, for example, and in some cases it's better than having nothing, but there is a lot of extra nuance here.
And we have seen some new papers. I think there was one paper by David Mytton, who we'll share a link to; he shared one recently about the fact that there are other approaches you might take for this. If you could just briefly touch on some of that, it'd be really nice, and then add some extra nuance, because it's not that there is no proportionality at all.
Maybe we could just talk a little bit about some of the things that David Mytton's been proposing as an alternative way to figure out a number here, because I'm mainly sharing this for developers who get access to these numbers, and they want to make a number go up or down, and it's useful to understand what goes into these models so that you are incentivizing the correct interventions essentially.
Romain Jacob: Yeah, of course. So that's actually a very important point, I think. You will often find, if you look on the web or anywhere, figures in energy per bit, or energy per X, or energy per web search, or energy per email, that sort of thing. Whatever we may think about whether computing such numbers makes sense, what is very important to understand is that those numbers were derived in an attributional way.
That means that you take the total power cost of a system, and then you divide by the number of bits that were transmitted. If you take a network, you take the sum of the energy consumption of all the routers and all the links and all the cooling and all of everything, and then you look at the total amount of traffic you've sent over your reporting interval, like a year.
You take one, you divide by the other, and ta-da, you get energy per bit. That is interesting to get an idea of how much energy you spend for the useful work you've done in that network. But it should not be interpreted as, this is the cost for a single bit, because that would be a different type of reasoning, what we call consequential reasoning.
If you do this, you would then draw the wrong conclusion that if I have a network that uses, I don't know, a hundred kilowatt hours, if I send a hundred gigabits more, I will use ten kilowatt hours of-
Chris Adams: It increases entire system by that rather than my share of this, for example.
Romain Jacob: Exactly. Except that it's not true. It's not true because the total number, in energy per bit, encapsulates all the infrastructure costs. And those infrastructure costs are constant; they are independent of the amount of traffic. And so this summary statistic is useful in order to track the evolution of how much your network is used over time.
But it's not good to predict the effect of sending more or less traffic. And it's a subtle thing: if you overlook this, you can make very wrong statements and make bad decisions. And so this is what these papers you referred to try to highlight and explain, and say that you need to have a finer view on the energy per unit that you're interested in.
It's a bit more subtle than that. People should read the paper. It's a great paper, very accessible, and not too technical, I think. And it's great for people that are interested in this area to get a good primer on the challenge of computing the energy efficiency of a network. I think it's really a great piece.
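A minimal sketch of the attributional-versus-consequential distinction discussed above; all numbers are invented for illustration, and the near-zero marginal cost is an assumption, not a measured value:

```python
# Attributional: divide total energy by total traffic over a reporting
# interval. Consequential: how much *extra* energy one more gigabyte
# actually causes. All figures below are made up.
total_energy_kwh = 100_000.0       # network energy over a year (assumption)
total_traffic_gb = 5_000_000.0     # traffic over the same year (assumption)

attributional_kwh_per_gb = total_energy_kwh / total_traffic_gb  # 0.02 kWh/GB

# Because infrastructure power is (almost) constant, sending 100 GB more
# changes the total by roughly the marginal cost, which is close to zero,
# NOT by 100 * attributional_kwh_per_gb.
marginal_kwh_per_gb = 0.0005       # near-zero marginal cost (assumption)

naive_estimate = 100 * attributional_kwh_per_gb   # misleading prediction
actual_effect = 100 * marginal_kwh_per_gb         # much smaller
print(naive_estimate, actual_effect)
```

The attributional figure is a fine summary statistic for tracking a network over time, but using it to predict the effect of extra traffic overstates the consequence by a large factor in this sketch, which is exactly the confusion Romain warns about.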
Chris Adams: Okay, cool. So basically, I think one of the implications of that is that, let's say I'm designing a website, for example. If I make the website maybe half the size, it doesn't necessarily mean I halve the carbon footprint of it, because some of the models we use, and they're popular, they are an improvement on having nothing, but there's extra nuance that we might miss.
I say this as someone who works in an organization where we have a software library called CO2.js. We have a transfer-based model for this, because this is one of the most common ones, one of the defaults. We also have an issue open specifically about this paper, because when you're starting out, you will often reach for some of these things.
And while there's benefit and some value in having some of these models to help you work things out, it's also worth understanding that there is extra nuance to this, and they can end up creating slightly different incentives. This is something we'll talk about with carbon awareness as well, because, again, different ways of measuring the carbon intensity of electricity can create different incentives too. So this is one thing we'll be developing over time, but if I may, I'm just going to touch on one other thing before we move on to wrap up.
This can give the impression that there is no proportionality between using digital tools and the rollout of extra infrastructure, and if you said there's no link, that would be an oversimplification as well. I think one of the previous guests, Daniel Schien, came on and spoke about some of this, and maybe you might paraphrase some of it, because I think his perspective is also very helpful and kind of illustrates why we need to come up with better models to represent this stuff.
Romain Jacob: Yes, exactly. Thanks for bringing that up. That is also extremely important. So I've said before, power is kind of constant; it doesn't depend so much on how much you send. Two things to keep in mind. First, there is some correlation. So if you do send more traffic, there will be an increase in power, and so you will consume more energy. That is true. And the work that I'm doing will make that even more so in the future. What we are trying to do is essentially find ways of reducing the power draw when you're under low utilization. And if we are successful in doing that, by sleeping and by other methods, then it will create a stronger correlation between traffic and power.
Right? And this is actually good, right? For energy efficiency, the closer you are to proportionality, the better. So if we are successful, then the correlation will increase. And then sending more bits, or having a smaller website, will have a stronger impact on energy consumption and carbon footprint.
So that's one aspect.
Chris Adams: Mhmm.
Romain Jacob: The second aspect is also extremely important. You mentioned already the work of Daniel Schien, and what is great about it is to think about the internet on a different timescale. If you look at one point in time, right now, the network is the static element. There are so many nodes in the network.
There are so many links, and therefore today, if I send more traffic, it will be low impact. However, if you take a longer timescale and you look at a one-year, six-month, or ten-year horizon, what happens is that when people send more traffic, you see the utilization of the links going up, and that will have the future consequence of incentivizing people to deploy new links, to increase the capacity of the network.
Chris Adams: Hmm.
Romain Jacob: Which means that over a year, so over time, as you send more traffic, you create more demand. As you create more demand, you will create more supply. That means scaling up your network, and every time you scale up, almost always, you will increase the energy consumption of the network, right? So, you will further increase the infrastructure costs.
So, as you send more traffic, as you watch more Netflix today, it does not consume more energy, not so much, but it will incentivize the network to be scaled up, and that will consume more energy. So, there is a good reason to advocate for what is known as digital sobriety, to try to use less of the network, or to make a more sensible use of the network, because if we use it more, it will incentivize a future increase of the network size and therefore a future increase of the energy consumption.
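The longer-timescale effect Romain describes can be sketched as a toy step-function model: power barely tracks traffic day to day, but sustained growth triggers capacity upgrades that raise the near-constant baseline. All parameters here are hypothetical:

```python
# Toy model: operators provision enough links that peak utilization stays
# below a headroom threshold. Power is flat until growth forces a new
# link, then jumps in a step. All numbers are made up.
import math

link_capacity_gbps = 100.0
power_per_link_w = 300.0
upgrade_threshold = 0.5   # add capacity before links get "full" (assumption)

def provisioned_power_w(peak_traffic_gbps: float) -> float:
    """Baseline power of the links deployed to keep utilization under the threshold."""
    usable_per_link = link_capacity_gbps * upgrade_threshold
    links = math.ceil(peak_traffic_gbps / usable_per_link)
    return links * power_per_link_w

# As peak traffic grows 40 -> 60 -> 90 -> 120 Gbps, baseline power
# steps up even though each individual link's draw is load-independent.
print([provisioned_power_w(t) for t in (40, 60, 90, 120)])
```

In this sketch, going from 60 to 90 Gbps changes nothing (the existing links absorb it), but crossing 100 Gbps of peak demand forces a third link and another 300 W of always-on baseline, which is the "more traffic today, more infrastructure tomorrow" dynamic.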
Chris Adams: I see. Okay. So basically, if you set these norms of all this extra use, even though nothing changes in the meantime, on the kind of multi-year timescale that people make infrastructure investments on, they would then respond to make sure that they've got that kind of headroom available over time.
And I think that's actually some of the work that Daniel has been doing to model, because there is no way that we can deploy new infrastructure with zero carbon footprint, even if everything is green. So there's an impact there that we need to be mindful of.
That's some of the work that he's referring to there. We'll share a link to that paper as well. I know that we can't model that in CO2.js, for example, in our library, but this is literally the cutting-edge work that I think he's been doing. And last time he was on, he was hiring for some researchers to find out, okay, how do you represent this stuff?
Because when we think about large organizations on the scale of Amazon or Microsoft, who are spending literally tens of billions of dollars each year, then you do need to think about these kinds of multi-year, decade-scale infrastructure investments. Okay. All right. So, we've gone really into the details, and we've spoken at the kind of macroeconomic level now.
I wonder if I can just bring this back to the kind of frame for developers who are like, oh, this sounds really cool. How do I use some of this? Or what would I do? Like, if you wanted to have an internet that was able to kind of sleep more and could grow old, are there any ideas you might use? Like how might it change how you build, for example?
Are there any kind of sensibilities you might take into account? Because as I understand it, some of the things with Hypnos were primarily designed to say, this is how you can do this without forcing people to make too many changes at the end user level. But there may be things that as a practitioner you might make things more conducive to or something like that, for example.
Romain Jacob: Yes, definitely. So I think it goes back to the question we had just before about Daniel's work, about looking at the longer time scale perspective. I think as an end user, as a software designer, I think thinking about sobriety is something that everybody should be doing. It's not just for sustainability.
I think one very recent Environment Variables podcast was about the alignment between sustainable practices and financial operations, and in many cases those two things align. In a similar mindset, if you think about web design, if you look at the W3C Web Sustainability Guidelines, they align pretty much perfectly with the accessibility guidelines and with the performance optimization guidelines. Why? Because a smaller website will also load faster and, you know, get people faster to what they want, the content they really want to consume. So, there's general value in being as modest in your demand on the system, or on the network, as possible, and for compute as well.
It's the same thing. Today, it does not yet translate into net benefits at the network level, but it might in the future. And, you know, somebody has to start, so you need the efforts of all sides in order to, you know, make that work. In the networking domain, I know there have been some people studying this from a theoretical point of view, where the end user could be able to say, I want to place a phone call,
but I'm willing to wait for, I don't know, 20 or 30 seconds before my call is placed. And if you have an ecosystem of users of that network where a sufficiently large share of users are so-called delay tolerant, then you can optimize your network in order to save resources, so saving energy, and ultimately reduce your carbon footprint.
In the more traditional networking domain, one could envision something like that. There is no work in this area, as far as I know. One way you could think about this, the incentive would be pricing. You could say, okay, I'm willing, not so much to wait, but more likely to cap the bandwidth I can get out of the network.
If you were to say, I'm willing to get at most, I don't know, 100 megabits per second at high utilization times, then you would get a discount on your internet deal. I think that's feasible, that's possible; one thing that could work quite quickly. But that would be more of a global effect; it's between the user and the internet service provider.
If you're a software developer, and you think about how your application could be built in such a way, I honestly don't know. I think it's extremely easy today, technically speaking, to have any sort of flagging, where your application can say, I'm delay tolerant, I can wait, I will not use more than a meg.
One can do that, that's easy. Your network can get that information. The tricky bit is, how would the network then use that information to route traffic in a way that would save energy? That is much trickier.
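The "easy part" Romain mentions, tagging transfers with a delay tolerance, could look something like the following sketch. The class and function names are hypothetical, and how a real network would exploit such tags to save energy remains, as he says, the open problem:

```python
# Hypothetical sketch: applications tag transfers with a deadline, and a
# simple scheduler sends urgent work immediately while deferring
# delay-tolerant work past a peak window, earliest deadline first.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Transfer:
    deadline_s: float            # latest acceptable completion time
    name: str = field(compare=False)

def schedule(transfers: list[Transfer], peak_until_s: float) -> list[str]:
    """Return the send order: urgent transfers now, tolerant ones deferred."""
    urgent, deferred = [], []
    for t in transfers:
        (urgent if t.deadline_s <= peak_until_s else deferred).append(t)
    heapq.heapify(deferred)  # earliest deadline first among deferred work
    return [t.name for t in urgent] + [heapq.heappop(deferred).name
                                       for _ in range(len(deferred))]

order = schedule([Transfer(10, "call"), Transfer(7200, "backup"),
                  Transfer(3600, "sync")], peak_until_s=1800)
print(order)  # the call goes out now; sync and backup wait out the peak
```

This only captures the flagging side; turning such flags into routing decisions that actually let links sleep is the part Romain identifies as much trickier.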
Chris Adams: Ah, okay, so that sounds like a possible route that people might choose to go. Because I know, for example, there is some work in the world of streaming, where there was a notion of, I think it was called the gold button, that was put together by the Greening of Streaming group. They were basically saying, look, most of the time, if I'm looking at television from across the room, I can't really tell if it's 8K or 4K.
So, allow me to, you know, have a default which lets me kind of, reduce the resolution or the quality so that when there's lots of people trying to use something, we can see the amount of data reduce somewhat. And that reduces the amount of kind of extra peak capacity people might need, for example. These are some ways to kind of make use of the existing capacity that lives inside the entire network to kind of smooth off that peak as it were, for example.
That's some of the stuff that we might be looking at. So it's very, in some ways, it might be, kind of providing hints to when you send things over the network. And I think there's actually some work that we've seen from some existing tools. I know that Facebook's serverless platform does precisely this.
And there is also some work from Intel, we'll share some links to this, where when you have a computing workload, you can basically say, well, I'm not worried about when this gets delivered, or, as long as it happens before this time, it's okay. And this provides the information for people running these systems to essentially move things around to avoid having to increase the total capacity, for example.
So you make better use of the existing capacity you have before you have to buy new capacity or deploy new wires or anything like that. That seems to be what you're kind of suggesting.
Romain Jacob: In the cloud computing world, that does exist for real. Yeah, for sure.
Chris Adams: Okay, so there's a possible path for future research. And maybe this is the thing we can wrap up on. So we've done a deep dive into sleeping and growing old, right? But that's not the only tool available to us. Are there any papers or projects or things that you would direct people's attention to, that you think are really exciting but may not necessarily be in your field?
For example, because you spend a lot more time thinking about networks than I do, and I'm pretty sure there's some things you might say that that, that other people listening here might, might enjoy following, for example.
Romain Jacob: Yeah. Yeah. So I think two things come to mind. The first is that today I talked about protocol adaptation, about, you know, how we would put things to sleep and save energy and so on. But one key problem we have, as practitioners in this field, researchers, and operators, is the lack of visibility into power data.
It's actually really hard to get a good understanding of how much power is going to be drawn by a given router, depending on the amount of traffic and depending on how this thing is configured and so on. This became very clear when we started working on this. And so, a big part of my research has been to try to develop tools for building datasets that aggregate such power information, in a way that people can contribute to and then use in their own research, do their own analysis.
And try to do some predictions about, okay, now, if I were to buy this device, for example, it will cost me so much in embodied carbon footprint, and this is how much I could hope to save, because I know how this device typically operates and how much it consumes. And so, this led to a dataset and platform project we now call the Network Power Zoo.
Which is actually a reference to another very well-known networking dataset, the Network Topology Zoo. It's kind of a historical reference to that. But it is really a zoo, in the sense that it's very broad. Devices that look the same can consume, I don't know, two or three times more power.
Even though they seem to have the same number of ports, the same number of connections, it can change drastically. So, it's still a work in progress. I mean, the database is being built and we're starting to pour data in. And very soon we'll do an open call for anyone to contribute their own data sources into this database.
So that people can have access to richer power related data.
Chris Adams: For a data informed discussion. Yeah.
Romain Jacob: So that's one thing that is very active for us. It's still very much related to networking, but it's not so much related to protocols. More generally, what has become clearer and clearer to me is that if we want to address the sustainability problems in networking, what we need are not really networking researchers, because a lot has to do with the hardware design and the hardware architecture, and writing good software for optimizing the hell out of the hardware we get.
And those are just not the typical expertise that you find in networking people. So, networking people are good at protocols, but, you know, they don't know as much about how the hardware is built and designed. Maybe I should not make such generalizations, but it's definitely true for myself. So, we've been poking more and more people from the computer architecture area, from hardware design, to collaborate with us and say, okay, look, we have these sorts of needs.
This has been mainstream in embedded systems for 20 years. We still don't have it in routers; it's high time, we need it now. If anyone is working in this area and is interested, please reach out to me, I'll be happy to chat.
Chris Adams: Brilliant, thank you for that. And I suspect, I'm just looking through our notes, we actually have been in touch on the E-Impact mailing list, which is one project by the IETF. That's one of the things we can share a link to; if you want to go deep into networking, that's probably one of the deeper resources I've found.
Okay, great. Is there anything else? We're just coming to time, so I just want to check. This has been really fascinating, and I've learned a huge amount from this. So, if people are interested in the work that you're doing, or they want to learn more about this, where should they look to find out more?
Like, we'll share a link to the paper that you worked on, for example. But beyond that, where do we find out what's going on with Romain Jacob and his team of researchers?
Romain Jacob: So, nowadays the best place to find me would be on LinkedIn. So, we'll add a link to my profile, but my name is not that common, usually I'm findable on LinkedIn at least. Yeah, that's the best place for you to reach out, I'm quite reactive there.
Chris Adams: Okay, cool. So I'll share the link there. I'll also share the link to HotCarbon, which had the paper that you presented. And if there's a link for ETH, the research institution you're part of, I'll add a link to that as well. Brilliant! Well, this has been really enlightening for me and hopefully for other people who've been listening along.
Thanks once again for being so generous with your time, Romain, and, yeah, have a lovely holiday over the summer, okay? Take care of yourself.
Romain Jacob: Thanks, bye bye.
Chris Adams: Bye! Hey everyone, thanks for listening! Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts.
And please, do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser.
Thanks again and see you in the next episode!