CXO Bytes
Navigating AI Risks with Noah Broestl
May 13, 2025
Host Sanjay Podder brings Noah Broestl, Associate Director of Responsible AI at the Boston Consulting Group, to the stage to explore the rapidly evolving landscape of generative AI and its implications for business leaders. Together, they talk about the requirements of present and future AI governance frameworks, the road to sustainability in AI, and how the emerging risks of Generative AI are shaping the future of responsible technology.

If you enjoyed this episode then please either:
Connect with us on Twitter, Github and LinkedIn!


TRANSCRIPT BELOW:


Sanjay Podder: Hello and welcome to CXO Bytes, a podcast brought to you by the Green Software Foundation and dedicated to supporting chiefs of information, technology, sustainability, and AI as they aim to shape a sustainable future through green software. We will uncover the strategies and big green moves that have helped drive results for business and for the planet.

I am your host, Sanjay Podder.

Welcome to another episode of CXO Bytes, where we bring you unique insights into the world of sustainable software development, from the view of the C-suite. I am your host, Sanjay Podder, and today we have an exciting discussion lined up on the challenges and opportunities of responsible AI. Joining us today is Noah Broestl, partner and associate director of Responsible AI at Boston Consulting Group. With a career spanning Google, the Oxford Artificial Intelligence Society, the US Air Force, and now Boston Consulting Group, Noah has been at the forefront of AI safety, responsible AI, and technology-driven sustainability solutions. At BCG, he helps global businesses develop robust AI frameworks that balance innovation with responsibility. He is also a steering committee member of the Green Software Foundation, working on initiatives to ensure AI and software development are aligned with sustainability goals. Today we'll explore how AI governance frameworks, sustainability in AI, and emerging risks of generative AI are shaping the future of responsible technology. Noah, welcome to CXO Bytes. Before we get into details, can you please introduce yourself?

Noah Broestl: Yeah. Thank you so much, Sanjay. Yeah. My name is Noah Broestl, partner and associate director of Responsible AI at Boston Consulting Group. Super excited to be here today. You know, I think you've covered it pretty well, Sanjay, in my background and the things that I'm working on now. But I've been thinking about responsibility and technology for well over a decade and been working directly in that space for well over a decade.

And so it's been a very exciting journey and as you're going through Oxford Artificial Intelligence Society, US Air Force, you know, a lot of really great memories working in all those spaces. But, you know, at BCG, really focused on helping people who want to be responsible in the deployment of technology

navigate what is effectively one of the most complex landscapes I've ever tried to operate in, which is we have technology laboratories that are producing, on a weekly basis, what are at least claimed to be breakthrough technologies. We have, you know, research laboratories, both academic and industrial, that are producing amazing frameworks, amazing tools for responsibility. We have government entities all over the US and all over the world that are producing guidance for how they should implement these technologies. But bringing all of those things together is really challenging and that's what motivated my move from Google into BCG, was to be closer to helping people. Now that we're seeing these technologies really have impact in organizations and in commercial applications, helping organizations figure out how you navigate any of that, it really does come down to, I think, two pillars, which are how do you govern this inside of organizations, and how do you build responsibility into the product development lifecycle? How do you enable your engineering teams, your product teams, your business teams, to really integrate responsibility as a component of the development lifecycle rather than as some stage gate that happens at the end of development prior to launch?

So, very excited for our discussion today. I'm sure we're gonna go a lot of interesting places.

Sanjay Podder: I'm really looking forward to learning a lot of things from you, you know, because I see that, you know, your interest in this topic has been there for more than a decade, in fact, probably right from your education, through Google and, you know, BCG. Just a quick question. You know, why? What got you interested in this?

Noah Broestl: Yeah, I mean, it's one of those questions that is, so first off, if you're ever at a cocktail party or anything like that and someone works in responsible AI and you want to get them excited, ask this exact question. Like, what was the moment in your career that was the turning point for you, where you went into responsible AI? And certainly all of the pathways that lead here, I think, are fairly winding. Like, you know, I started as a computer science major. I ended up getting degrees in sociology and history and law and data science and ethics, eventually, but I think I can point to one place in my career where I started asking questions about the intersection between technology and society.

And one of the most gratifying roles that I've ever had was working in abuse response. So working on a major product with over a billion users and thinking about how do we protect these users from vandalism and fraud, how do we make sure that this product is trustworthy and useful to the people that we're providing it to? And you do that for a certain amount of time and you start to ask, I think, questions about why we're doing the things that we're doing, right? So if we take, for example, like for protecting a product leading up to an election, there are a couple of strategies you can take and one of them is just turn off the tap, right?

Like, just stop any user content from being posted to the platform. Identify high risk places and say, alright, we're not going to accept any UGC on this. We're going to heavily curate these features in our data set, and we're going to allow them to just sit there and not have any of the other components that you may need for modern platforms that are going to increase freshness of data and make sure you have, you know, the most up-to-date information.

So that is one strategy. You can shut everything down and protect it. Another question that you could ask there is, don't we have an obligation as a transnational platform, as a global platform, to amplify the voices of people expressing dissent with elected officials at the time when they have the most agency?

I think that's another perspective you could take on this. Right. And if you shut off the tap, are you impacting the way that people are approaching their, yeah, I guess, you know, expressing dissent with elected officials, like, are those perspectives important to amplify or are they important to protect?

And that question for me was the hinge that my career turned on, is how do we answer that question? And to answer that question, I had to make this shift from looking at my career as progressing through infrastructure engineering and technical program management in infrastructure engineering,

and I made a shift to, first academically, I moved from a degree in data science that I was working on and went into ethics. I said, "I need to go understand how we make decisions about what we ought to do." And that degree in ethics, I did my masters at the University of Oxford at the Uehiro Center.

I was hoping that I would learn what I should do. I think I probably more accurately learned how to poke holes in what other people were doing. So I'm not sure it really gave me everything that I was looking for. No, but it was a fantastic way to understand, how do we approach decision making in these complex spaces?

And then secondly, the way that we're using artificial intelligence, and certainly this pivot in my career was almost a decade ago, when I really got deeply into this, the artificial intelligence technologies that we were employing were pretty crude versions of what we see now. And so I had a lot of questions about, you know, what is the direction that artificial intelligence is moving in and how should we be prepared for the next evolution of these technologies? And so I moved into research, into AI research, and tried to get as broad of a perspective as I could in how those technologies were evolving and where we could deploy them, deeply understanding how they would be able to integrate with sociotechnical systems in the future.

And I remember at the time I said to my manager, "Hey, I'm going to shift my career to think about ethics and artificial intelligence." And he said, "oh, that's cute. You're never gonna make any money there, but if it makes you happy, go ahead and do that." And so it was definitely the point where my career shifted.

Definitely the point where I saw a problem, wanted to investigate that problem deeply, and that led me through all of the other things, you know, working in responsible AI research, leading safety evaluation for generative products, and eventually landing here at BCG, leading responsible AI.

Sanjay Podder: Fantastic. Very interesting and inspiring, I should say. I'm sure you are super excited with the growing adoption of generative AI, even as the risk landscape has expanded. You know, you spoke about AI safety, you know, hallucination, you know, it's no longer just about bias and explainability, and, you know, the whole risk landscape is now so broad. You know, that brings me to an interesting question. I know you recently wrote an article on the BCG website, "Generative AI Will Fail. Prepare for It." And in the article you highlighted the inherent unpredictability of generative AI systems and the need for continuous monitoring and escalation frameworks, given that AI failures can range from misinformation to serious regulatory violations. And as you just mentioned, how should organizations approach generative AI governance to balance innovation with risk mitigation in this age of generative AI? Right. You know, traditional AI also had some of these challenges, but now with generative AI, you know, how do you do the governance part?

Noah Broestl: Yeah. Generative AI is different. Like the risk landscape of generative AI is different and we should definitely kind of dive into exactly why that is. But, the first thing I wanna say is, I think it's a fallacy that there is a trade off between innovation and risk mitigation. 

I think it's a fallacy that there's a trade off between innovation and responsibility.

And there's a couple of analogies, one that we touch on, it's something Tad Roselund, who used to be our chief risk officer at BCG, used to say all the time, which is, you know, you ask F1 drivers, "What lets you go fast? What lets you go fast?" And their response is the brakes. "The brakes are the thing that let us go fast.

Being confident in our ability to brake." And I think that's, I love this analogy particularly because if you're an F1 fan and you watched the Shanghai F1 this weekend, Lando Norris's brakes failed near the end of that race. And as he was going around these laps, the braking distance was getting longer and longer to the point that there was a catastrophic failure in the brakes.

And you just see, George Russell catching up on him every single lap. And certainly if that race had been five laps longer, Lando Norris would have come in third or fourth or fifth or sixth. Like, if you lose those brakes, you don't have the ability to confidently move at the pace that the organization can move at.

And I think that's a good analogy for the way that we should look at responsible AI programs and AI governance. It should enable us to move quickly. And we actually see this bear out in the research as well. We did a study with MIT, where we showed that the organizations that were spending the most resources and had the most interest in responsible AI,

were the ones that were also scoring the highest on innovation measurements.

And so there's a little bit that you could say there around causality versus correlation and, you know, innovative companies are also interested in responsible AI, but the fact remains that, once it's implemented appropriately, it enables organizations to move quickly.

I also think about the analogy between program management and responsible AI. If you've worked in a tech organization, you know that a lot of engineers have a really bad opinion of program management, that they look down on it. They see it as something that slows down velocity.

And I think that's because there's been very naive applications of program management in the past. Vanilla agile. You just come in and you say, we're just going to deploy Agile in this space without a clear understanding of where the friction exists inside of the development process. And we need to do the same thing with responsible AI.

Like we need to know all of the items that are available in our toolkit, we need to understand when it's appropriate to go more deeply into principle definition. We need to know when it's appropriate to build risk taxonomies that align with the use cases that we'll see inside of the organization.

We need to know when we need to upskill people in particular areas. And we need to deploy those things in the appropriate order to resolve the friction that exists inside of the organization and to identify those really high-risk use cases. At BCG, you know, we've implemented our AI governance in such a way that it all stems from a definition of what are our no-fly zones, what are our high-risk areas, and what are, you know, medium-risk or low-risk areas. And what we do there is we triage all of these cases that are coming in to say, these are things we're just not going to participate in; we're just not interested, given the way that they align with our corporate ethos.

At the very beginning, at the ideation phase, we say "this is something that we're not comfortable pursuing." We then look at all of the rest of those cases and we try to target ourselves so that about 10% of those cases are ones we're really giving high visibility into. We really want to be able to dig into what's either particularly high risk, or particularly new for us, so that we can develop enablement materials with our responsible AI team being really hands-on with that work. And once you have that, then for the other 90% you provide enablement materials, you provide oversight, you make sure things are in your AI inventory, but you're enabling those teams to really explore and do interesting things and solve interesting problems with technologies we feel more comfortable with. And that is what really enables you to start to scale experimentation while still mitigating those risks. And so you can highlight that innovation. Now, we can shift a bit and talk about risk as well, and novel risks and generative AI.

But I mean, does that jive with what you see, Sanjay? Is that what the space looked like as you're spending time looking at responsible AI programs?

Sanjay Podder: No, absolutely yes. And in fact, a couple of things here, right? You know, I believe we have also written about the non deterministic nature of AI, which makes it very, you know, you can't really predict all the risks, right? 

And you can't really, therefore, like a traditional system, plan to do a complete check of all the risks.

In fact, one of my areas of interest has been metamorphic testing in the past, given how AI models behave, how they're vulnerable to, you know, attacks. So, you know, how do you look at an AI system in a way that you can still manage the risk without going through the traditional way of, you know, finding out all the risks and then trying to address them one by one?

Right? 

So that has been my work in metamorphic testing, but even if you look at regulations like the EU AI Act, even there, if you see, you know, the different types of models that an organization can have, you know, you are classifying them upfront as high risk or medium risk or low risk, you know, so that you can focus more on AI models that are serving more high-risk kinds of business use cases.

I think where I'd really like to go more, let me ask this question to you, Noah. You know, Noah, traditionally, what I have observed is when we talk about responsible AI, you know, one of the risks that did not receive the importance it deserves is the impact of AI on the environment. Whether it is the emissions, whether it is resources like water, and there can be many, right, energy use. We all know generative AI is making a lot of demand on the electricity grid. When you look at the governance, risk and compliance and monitoring today for responsible AI, how do you see sustainability getting integrated into the governance part? And I would therefore also like to understand what brought you closer to the Green Software Foundation, because that's our sweet spot, right?

The sustainability part.

Would love to hear your insight on this point.

Noah Broestl: Yeah. Yeah, I mean it's a really critical point in the trajectory of the technologies right now. You know, we see these shifts. Organizations are beginning to abandon their goals around net zero commitments, and that is a direct result of the trajectory of generative AI right now. And so I think that there's a lot of discussion that's happening in the space of, "oh, how bad really is this?"

and people say, "oh, it's, you know, training one of these models is like flying 15 private jets around the world 20 times." Or, and then someone else says, "oh, training one of these models is like leaving your light on for, you know, for too long when you average it out over the year." Like the data here is, it's really difficult to track down where

the actual environmental impact of artificial intelligence is. And I think that makes it difficult to navigate the space. When I think about responsibility, so maybe taking like a huge step back and saying, you know, what is responsible AI? I think we first have to start from this space of what is artificial intelligence? And the challenge that I see here is that when people approach this topic and they say the risks of artificial intelligence are X, Y, and Z,

we have this taxonomy problem where they could be talking about anything from a simple univariate, linear regression, all the way up to killer robots, right? Like there's just this huge space that we could be talking about when we say artificial intelligence. And so responsible artificial intelligence then becomes even harder to define.

But one of the things that I've found in the work that I've done is anchoring on this word sustainability. And I think sustainability, multidimensional sustainability is incredibly important to how we deploy responsible AI programs inside organizations. And that means that we need to be thinking about the social sustainability of the systems we're deploying.

We need to think about how those impact the groups of users, the social institutions, the historical social biases that come out of these systems. We need to be thinking about the cultural sustainability of systems. We need to think about, and there's a lot of work that's being done here when we deploy these artificial intelligence systems as thought partners into, you know, the university settings.

Is it really a thought partner to the student or is it impacting the perspectives that the students have? And, Shirin Duddy out of, I think she's at Boston University now, did some fantastic work around cross-cultural impacts in student populations of these systems. So we need to be thinking about the cultural sustainability, or we're going to end up living in a very boring world in a few years where everybody shares the same opinion on everything.

We need to think about the economic sustainability, of these systems. We need to make sure that this is not the equivalent of strip mining in the way that we approach the usage of these systems. We need to think about the regulatory sustainability. Very challenging to anticipate all of the regulations that are coming in this space.

But we also need to think about climate sustainability. And I think that this piece, as I said, with the progression of energy usage at companies who are leaning into these technologies, is something we really need to think deeply about. I also like to think about AI technologies as kind of six components. And I think a lot of people, when they talk about AI, they're really thinking about two things, which is data and compute.

And that used to be the very traditional way of approaching the limitations in artificial intelligence. You either need more data or you need more compute. Now, I think we can say that there's at least six areas that we need to be focusing on when we're thinking about limitations in artificial intelligence.

And certainly data is one of those, certainly methodology is one of those, like, it's fairly clear that just building an LLM based on a transformer architecture and implementing it as a simple chatbot is not going to get us to those super intelligent systems that people are talking about.

So we need to think about how methods are limiting us in these spaces. We need to think about workforce. We need to think about education and how people are able to interact with these systems. We need to think about application, that is to say, how it's integrated into our businesses and how we can really provide value, how it can increase efficiency, but also capture that efficiency to increase productivity. But the last two of those six are hardware and, for lack of a better term, I think I might need to split these out as I have these conversations in the future,

energy and water. So those resources that go into the data center. So we can think about hardware as, you know, not just the GPUs, but we can also think about that as the physical build-out of the data center, right? The concrete that goes into building out these data centers, the production of rebar, those types of products that result in really high embodied emissions going into these systems, and then the energy and water that are needed to run these data centers in the long term. And particularly double-clicking on that energy piece,

there's a conversation that happens in the artificial intelligence space and I spend a lot of time talking about energy and AI, and we can look at two sides of that, right?

Which is there's AI for energy, and there's energy for AI. And those are generally the two areas where people try to have these conversations. And I think that there is a tension right now between the people who are saying that AI will solve our energy problems, and the people who are saying that AI is causing more problems in the energy grids, right?

Like, and I think you could frame that as optimist and pessimist. I don't think that's the right way to frame it, but often that's seen as the way it's framed. Like, don't worry about it. The progress in AI will make the grids so efficient that we won't have these problems. And first off, I think there's reason to be highly skeptical of that.

But before we get to a decision on which one of those perspectives is right, we need to be able to measure. We need to be able to measure, and we need to have transparency into the reporting of the measurements of the emissions of these systems. And this is where I see the Green Software Foundation and what gets me so excited about the work with the Green Software Foundation, is I'm a measurement guy, like I'm a test and evaluation person. I want data to understand what's happening in the world. I certainly have intuitions, and given how long it takes to get some of this data, we have to have those intuitions. So we have to act on those intuitions. We have to have a bias for action in this space, but we also need the data to be able to do this.

The work that the Green Software Foundation has done, particularly around the carbon intensity measurements for traditional tech systems, incredibly important that we have the tools to be able to bring that data, to be able to produce that data and bring that to members internally in an organization.
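
For context, the Software Carbon Intensity specification referenced here defines that measurement, roughly, as a rate of emissions per functional unit:

SCI = ((E × I) + M) per R

where E is the energy consumed by the software, I is the carbon intensity of that energy (for example, grams of CO2-equivalent per kilowatt-hour), M is the embodied emissions of the hardware involved, and R is the functional unit, such as per user, per API call, or per training run.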

The same thing applies for the Software Carbon Intensity measurement for AI, which is something that the Green Software Foundation is working on right now. And I've had a large role in helping structure that and define that. There are questions that we need to be able to answer here about how we approach inference versus training versus research emissions that go into these systems. How do we account for those? How do we look at, when you train an AI system, what portion of those embodied emissions goes into each inference? How do you have a function that gives some amortization or some exponential decay to, you know, encourage particular types of behaviors with these systems?
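
To make that amortization question concrete, here is a minimal illustrative sketch in Python. It is not part of the GSF's SCI-for-AI work; it just shows one way the problem could be framed. It assumes a single one-time training-plus-embodied emissions figure, a forecast of total lifetime inferences, and an exponential-decay weighting so that earlier inferences carry a larger share of the training cost. All names and numbers are hypothetical.

```python
# Illustrative sketch only: combining per-inference operational emissions with an
# amortized share of one-time training/embodied emissions. All figures are hypothetical.
import math

TRAINING_EMISSIONS_KG = 500_000.0        # assumed one-time training + embodied CO2e (kg)
EXPECTED_LIFETIME_INFERENCES = 1e10      # assumed forecast of total inferences served
DECAY_RATE = 3.0                         # how front-loaded the amortization is

def amortized_training_share(inference_index: int) -> float:
    """Share of training emissions (kg CO2e) assigned to the n-th inference.

    Uses an exponential-decay weighting, normalized so the shares sum
    (approximately) to the full training emissions over the forecast lifetime.
    """
    n = EXPECTED_LIFETIME_INFERENCES
    x = inference_index / n                                      # position in lifetime, 0..1
    weight = math.exp(-DECAY_RATE * x)                           # earlier inferences weigh more
    normalizer = n * (1 - math.exp(-DECAY_RATE)) / DECAY_RATE    # approximate sum of all weights
    return TRAINING_EMISSIONS_KG * weight / normalizer

def per_inference_carbon(inference_index: int, operational_kg: float) -> float:
    """Total CO2e (kg) attributed to one inference: operational plus amortized training."""
    return operational_kg + amortized_training_share(inference_index)

if __name__ == "__main__":
    # Compare the very first inference with one halfway through the forecast lifetime,
    # each assumed to consume 0.2 g CO2e of operational energy.
    for idx in (1, 5_000_000_000):
        print(idx, round(per_inference_carbon(idx, operational_kg=0.0002), 8))
```

A flat amortization would simply divide the training emissions evenly across the forecast lifetime; the decay rate here is the kind of knob a specification could tune to encourage or discourage particular behaviors, which is exactly the design question being raised.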

Really hard questions that I don't think we have the answers to yet. And certainly when we look at the ecosystem for artificial intelligence right now as a very simple bifurcation between closed source, you know, behind-API models where we have a single locus of control for those systems, versus open source, where we train a model and now we put it out there and people can download it and use these open-weight models.

How do you account for embodied emissions in that space? Like, these are the really hard questions that we need to be able to answer from the technical perspective of measurement. And even if we're able to answer all those questions, we have a completely separate problem, which is how do we give visibility and transparency into the way that these systems are using energy and producing carbon as a result of it, or other greenhouse gases, or, you know, whatever it is in the hardware and construction of data centers.

We need to be figuring out the approach here, and I think this is another place where partnership with the Green Software Foundation is incredibly important. We need to be able to articulate to business leaders the value in exposing this information to users. We could wait for regulatory pressure.

Like that is one option here, right? Like, let's just wait and hope that regulators force these organizations to explicitly outline the carbon that is generated as a result of using these technologies. And there's progress there. Like I definitely want to call out that we see a lot of progress, particularly in Germany around accounting for the emissions of data centers, bringing down the emissions of data centers.

And I think it's really important that we continue doing that. But we also need, from the business side, as I said, to convince business leaders that there is value in exposing this information. And I think this comes to something that I have been looking at for a while, which is the ceiling that we see in performance of a lot of these artificial intelligence systems.

We are moving rapidly towards a space that is commoditized from an accuracy standpoint, and we will still continue to see progress. I don't think it will be the exponential progress maybe that we saw right after the release of ChatGPT, but we will continue to see progress within these systems, but we're going to see a lot of providers in the space,

they're going to be commoditized from a sense of accuracy, and so now you have to think about market differentiation for yourself as a platform provider or as a user of these platform systems inside of your business applications. And if you can confidently say that, yes, you can go to company X, and you can get a hundred percent of the accuracy, or you can come to my company, company Y and you can get 80% of the value for 20% of the carbon generation.

That is a value proposition to users that allows them then to really make decisions about how they're moving in the marketplace. All of the data that we're seeing shows an increase in consumer interest in the environmental impact of their choices in the marketplace. And so I think it's incredibly important that, in partnership with the Green Software Foundation, we're working with organizations not only to provide tools for them to be able to measure and report on these things internally to make decisions for themselves, but also to be able to expose this confidently in a way that is mutually beneficial and presents a virtuous cycle where users are saying, yes, we believe that low-carbon-emission technologies are what we wanna spend our money on, and that encourages more companies to participate in the marketplace in that way. And so I'm excited about the work with the Green Software Foundation, as it helps us move both of those levers.

Sanjay Podder: I think that was a fabulous response, Noah. And I like the way you articulated different aspects of sustainability, right. Though in the Green Software Foundation, for obvious reasons, we are very much focused on the climate aspect of sustainability.

And I also believe that traditionally the other aspect, the social aspect, those have been dealt with more thoroughly than the environmental aspect.

So it is therefore much more interesting to figure out, you know, what needs to be done when it comes to environmental impact. And that will be a big focus for the Green AI Committee of the Green Software Foundation as well, and with experts like yourself who are all coming together to help us get these answers, you know, this would be the game changer for us. Right. So, Noah, I'm, you know, I'm sure the work you're doing at BCG as well as the work you are doing in the GSF is going to be incredibly important for advancing this space. And before I wind up this podcast, you know, I would like to hear from you if there is one final piece of advice you would give to technology leaders looking to embed responsible AI and sustainability into their AI strategies?

Noah Broestl: I wanna make two points here if you'll allow me. So the first one is on that point you made about, you know, the Green Software Foundation is focused on the climate sustainability. You know, social sustainability has gotten some focus. Economic sustainability gets a lot of focus, probably.

But one of the things that I think is deeply challenging in this space for practitioners, is that we'll never reach bedrock here, right? Like there is always going to be work to be done. In my lifetime, we will not solve these problems, and that makes it very difficult to get out of bed and do this work every single day, right?

Like these are long-term problems that we need to be focused on, and we need people who are passionate about solving particular areas, and we need to ensure that we're providing the tools so that people focused on these problems can really, not to overuse the term sustainability, sustainably perform in these spaces.

Like we need to make sure that we're providing the resources and the tools. 

I think it's very important that we have community around individuals who are working in this space. So I think we need, broadly, to just connect more. Like we need to spend more time connecting. You know, go to the Green Software Foundation Summits, go to research conferences, connect, talk about these issues.

Very important that you find a community to help support you as you continue to approach these things. Now, when it comes to advice for organizations that are in this space, I still look at the maturity curve around the deployments of artificial intelligence technologies, and we're still very far on the left side of that, right?

There's still a lot of space to get to a place where we are deploying these technologies and really seeing value out of them. And so my advice to organizations is probably twofold. First off, identify a set of key stakeholders who are going to be thinking about responsibility inside of your organization.

This is not a technology problem. This is a business problem, and you need to ensure that you have responsible AI and AI sustainability and climate impact integrated as a risk stripe across your entire organization. You cannot just ask the technical components of the organization to tackle this. The second thing is, get started.

Like, get in there, get your hands dirty, and start piloting generative AI technologies. Start seeing where they will work inside of your organization. This does not mean go launch an external app tomorrow. It means find places in your organization where you know that the risk is low and the value is high to increasing efficiency inside of your organization.

Do that as a mechanism to build the muscle in your organization to manage generative AI risk. Do that in concert with the group of cross-functional stakeholders and start building your responsible AI program from there.

Sanjay Podder: Great. Well, we have come to the end of our podcast episode. All that's left for me is to say thank you so much, Noah. That was really great. Thanks for your contribution and we really appreciate you coming on to CXO Bytes.

Noah Broestl: Thank you so much, Sanjay. This was fantastic. Really enjoyed being here.

Sanjay Podder: Same here. Awesome. That's all for this episode of CXO Bytes. All the resources for this episode are in the show description below, and you can visit podcast.greensoftware.foundation to listen to more episodes of CXO Bytes. See you all in the next episode. Bye for now.

Hey, everyone. Thanks for listening. Just a reminder to follow CXO Bytes on Spotify, Apple, YouTube, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show. And of course, we want more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. Thanks again, and see you in the next episode.