CXO Bytes
Responsible AI with Dr. Paul Dongha
March 13, 2025
CXO Bytes host Sanjay Podder is joined by Dr. Paul Dongha, Head of Responsible AI and AI Strategy at NatWest Group, to discuss the evolving landscape of responsible AI in financial services. With over 30 years of experience in AI and ethical governance, Paul shares insights on balancing AI innovation with integrity, mitigating risks like bias and explainability in banking, and addressing the growing environmental impact of AI. They explore the rise of generative AI, the sustainability challenges of AI energy consumption, and the role of organizations in ensuring AI ethics frameworks include environmental considerations. Tune in for a deep dive into the future of AI governance, sustainability, and responsible innovation in the financial sector.

Learn more about our people:
 
Find out more about the GSF:

Resources:

If you enjoyed this episode then please either:
Connect with us on Twitter, Github and LinkedIn!


TRANSCRIPT BELOW:


Sanjay Podder:
Hello and welcome to CXO Bytes, a podcast brought to you by the Green Software Foundation and dedicated to supporting chiefs of information, technology, sustainability, and AI as they aim to shape a sustainable future through green software. We will uncover the strategies and big green moves that help drive results for business and for the planet.

I am your host, Sanjay Podder.

Welcome to CXO Bytes, the podcast where we explore the intersection of technology, sustainability, and AI with leaders who are shaping the future. I am your host Sanjay Podder, and in this episode, we dive deep into the world of responsible and sustainable AI in financial services. Today, I'm thrilled to be joined by Dr Paul Dongha, head of responsible AI and AI strategy at NatWest Group. With over 30 years of experience in AI, financial services, and ethical governance, Paul has played a pivotal role in ensuring AI is deployed responsibly, balancing innovation with integrity. He has spearheaded AI ethics frameworks at major banks, advised on government AI policy, and even taught Generative AI at Harvard Business School.

His expertise spans AI bias mitigation, responsible AI frameworks, regulatory alignment, and environmental impact of AI. In this episode, we will explore how AI is transforming banking, how ethics and risk management can drive innovation, and why sustainable AI is crucial to our digital future. Paul, welcome to CXO Bytes. Let's start by having you introduce yourself to our listeners.

Paul Dongha: Great. Hello Sanjay. Thank you for that delightful introduction. Really great to be here. So I'm Dr Paul Dongha. I'm head of Responsible AI and AI Strategy at NatWest Banking Group, based here in London. So as part of my role, what do I do? It's really split into two parts. As head of Responsible AI, I have a team of dedicated professionals who look at people, process, and technology.

They look across the bank and ensure that the right people are involved to manage ethical risks of AI. So that's the people part. The process part is ensuring that the processes and workflows we have both within technical teams and risk management teams are appropriate for managing ethical risks. And as part of the technology part, my team work with model development, machine learning engineers, data scientists, to ensure that we have the right tooling in place in our platforms to mitigate ethical risks.

And as part of the strategy, I have a team that lays out the bank's AI strategy for the next three to five years and ensures that, across the bank, all teams are working towards implementing the strategy. 

Sanjay Podder: Great. So, Paul, you know, you have had an incredible career, I can see, from AI research and academia to leading responsible AI at one of the UK's largest banks. Can you share your journey and what inspired you to champion ethical AI in financial services? 

Paul Dongha: Well, Sanjay, as you say, I've had a long career, so it's a long story, but I'll try and be brief. I mean, look, I started programming in the 80s, right. A long, long time ago. And I was just, taken with programming. I love being technical. And I was lucky enough to study computer science at university.

And as part of my final year project, I just got into AI and I thought, "wow, this is really exciting." That led to me eventually doing a PhD in artificial intelligence in the early nineties. And I used to teach natural language processing. I used to teach AI. And I found it super fascinating, but as an academic in the 90s, and being in probably the third AI winter, there were actually no jobs in AI.

Which is kind of really weird to say, but right now, looking back, it literally didn't exist as an industry. So I had a choice. I could stay as being quite a poor academic in a field that looked like it was going nowhere, or I could leave and get a job in a commercial enterprise. So I chose the latter, right? So in the late nineties, I came to the city of London and I spent 20 years working in various investment banks, always building systems, so building complex bank-wide risk management, pricing, derivative systems, and so on.

But I always had this hankering to go back to my passion, which was artificial intelligence, and I think it was around 2015 I started seeing AI popping up, you know, Netflix, Amazon, collaborative filtering, and I got to thinking, "hold on, this is AI. This is kind of the stuff that we used to talk about."

And over the next sort of two, three years, we saw more of it in mainstream news, right? Google were doing AI research, Amazon, Facebook, all the big tech companies. And I guess it was about 2018, '19, my kind of midlife crisis. I thought to myself, "well, look, do I want to go and do that work as a fairly old person, which is a passion, or do I just carry on working in banks, doing my thing?"

And I made the decision that, look, I'm going to leave my career and go back to AI. So I spent about a year: I sat at my desk, I did loads of research, kind of caught up on a lot of the AI work that had happened over the last 15 years. And it was amazing. You know, we have Keras, we have TensorFlow, we have frameworks.

You know, you can build machine learning applications in days rather than, you know, the time it took when I was working on it. And really quickly, I happened upon ethics and I thought to myself, what has ethics got to do with AI? And really quickly it became apparent we have a problem, right, that the probabilistic approach to AI, the so-called transformer architecture that we have now, is only an approximation to what we really want to do.

So it was very quick. I just realized that there'll be no end of technical people building the most powerful AI, but how many people really understand the ethical risks from the ground up, from building them and being able to take a view as to how harmful they could be and what those risks were. So I decided this was it.

This was exactly what I want to carry on doing. So I worked for a tech company. I headed up AI ethical research for the European division for about a year and a half. And then I went to Lloyds Banking Group as their group head of data and AI ethics. And then NatWest Group, running strategy and AI ethics.

And Sanjay, it's amazing. I'm doing the work that I love in an area that I think is really urgent, that we have to pay attention to. So that is my journey over sort of 25, 30 years. 

Sanjay Podder: Wonderful. And, you know, with the advent of generative AI, this risk landscape just gets more complex, right? On responsible AI, you mentioned trust, ethics, you know, safety of AI. You know, one thing that brought both of us together was a very different aspect, which is sustainability of AI. You know, when we first connected over the topic, it was about the environmental impact of AI.

And what I have observed myself is that traditional, you know, responsible AI frameworks, they tend to ignore the environmental impact largely, but that is changing. What has been your observation? Do you think sustainability should be a first class citizen when we look at a responsible AI framework? 

Paul Dongha: Sanjay, absolutely. And 

I think what really triggered it was the launch of the transformer architecture, the famous 'Attention Is All You Need' paper. And when that was embodied within ChatGPT, we started looking at actually how much compute is involved in just satisfying a simple prompt.

And there are billions of what are called FLOPs, floating point operations. And when you really look at that architecture, you think, wow, that is really quite something. Compared to a Google search pre-gen AI, when you compare the two, you realize this is a significant undertaking. And imagine scaling that up, not just for ChatGPT but for applications for different use cases, across industries, the consumer market, as well as the corporate market.
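For a sense of scale, here is a rough back-of-envelope sketch of the inference compute Paul is describing, using the commonly cited approximation of roughly two floating point operations per model parameter per token processed in a dense transformer. The model size and token counts below are illustrative assumptions, not figures quoted in the episode.

```python
# Rough back-of-envelope estimate of inference compute for one prompt.
# Uses the common approximation: FLOPs ~= 2 * parameters * tokens processed.
# The parameter count and token counts below are illustrative assumptions only.

def inference_flops(n_params: float, n_tokens: int) -> float:
    """Approximate forward-pass FLOPs for a dense decoder-only transformer."""
    return 2 * n_params * n_tokens

model_params = 175e9          # assume a GPT-3-scale model (175B parameters)
prompt_tokens = 200           # assumed prompt length
completion_tokens = 300       # assumed completion length

flops = inference_flops(model_params, prompt_tokens + completion_tokens)
print(f"~{flops:.2e} FLOPs for a single prompt")   # on the order of 1e14 FLOPs
```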

This technology is so diffuse and has permeated everyone's lives. It became really apparent that this wasn't going to be a technology that just large corporations can use. It's going to be a technology that's just used everywhere and used everywhere very quickly. And some people talk about exponential acceleration and so on.

And I came to realize, actually, we really need to pay attention to this, although it was talked about as kind of a side issue of ethics. So if you look at the EU's High-Level Expert Group, when they came up with their seven pillars of responsible AI, it was mentioned. But was there actually anything happening on that?

I don't think so. So now what I talk about is exactly as you say, Sanjay, sustainability needs to be a first class citizen in ethical risk management. We need to treat it with the same seriousness. We need to increase our level of awareness of it. And organizations need to pay attention to what they do

with AI. So yes. I guess the new approach now for me is to talk about this and to work with organizations to ensure that steps are taken to reduce climate impact through carbon reductions, reduce water usage for data center cooling, and actually optimize the operations of generative AI. 

Sanjay Podder: Absolutely. And, you know, there is so much more thinking to be done around, you know, how do we measure the environmental impact? What should be the thresholds? How do we comply with regulations or, you know, the business's own standards? You know, what are the things to monitor? What's the impact on water, for example, and other resources, not just, you know, energy? So this whole area is becoming so much deeper and needs almost the same amount of focus as we have traditionally given areas like, you know, explainability, bias, and so on and so forth, right. So that's really a very interesting area to further explore.

So Paul, you know, you're from financial services. And the financial services industry is one of the most AI-intensive sectors, from fraud detection to hyper-personalized banking, and one where explainability and regulatory compliance are critical. How do you ensure that AI models used for credit scoring, lending, or fraud detection are explainable and auditable?

And how does a company like the NatWest Group mitigate those risks while still fostering innovation? 

Paul Dongha: Yeah, Sanjay. I mean, it's a really good question. So, the technologies used for things like credit scoring, credit lending and so on predominantly fall into the traditional AI camp. So before generative AI, we had predictive AI. So that's where AI is, in effect, helping to make decisions like credit lending decisions.

Now, fortunately, that technology area is much less complex than generative AI and transformer architectures. And it's been around for quite a long time. So if we go back even sort of 5 to 10 years, that was the first wave of predictive AI. And as that came up and became widely used in banks, it allowed financial services institutions to build the frameworks to validate those models and to put those models in their enterprise risk management frameworks and so on and so forth.

So if we step back a little bit, risk management as a practice within banks has a long history, 20, 30 years. So we have things like model risk management, model validation, risk tolerance, risk appetite, model risk policies. All of these kind of artifacts and processes were established some time ago.

So when predictive AI came about 10 years ago, they were adapted to make sure that they could deal with things like credit lending and credit scoring and insurance underwriting decisions. So one of the things that financial institutions, including NatWest Bank, have is a robust model validation process and team.

And what that is, is mathematically qualified professionals who can look at a model. When I say model, I mean exactly that, a predictive lending model. And they can reason about its behavior. They can look at the data that goes into it. They can look at the results. They test it.

They look at the limits of it. They look at how it behaves. And in partnership with the development team, they ensure that credit lending decisions that the model makes are understood, predictable, and to some extent can be explained as well as possible. And there's technology tooling you can use for that.

So within a risk tolerance and a risk appetite, a model will be created. It will be overseen by model validation. This will be weaved into the risk management that the bank already has in place. And those models will be tested rigorously with the folks that develop the models. So it's quite a well understood and very robust process.

And on top of that we have the audit function of a bank. Now the audit function of a bank looks across the bank at all sorts of different projects and processes in place and really ensures that it's robust and fit for purpose as well. Some organizations call that a third line of defense. So it's really an extra layer of checking and validation to ensure that everything that should have been done to mitigate risk has been done.

And typically the audit function will have a reporting line into the bank, into quite a senior level and can advise the board to ensure that things have been done properly. On top of that, of course, there's things like model monitoring. So when we put a credit lending system live and it's in operation, we don't just leave it and not look at it.

It's monitored very closely, both using technology and using our risk management teams, to ensure that we can step in should the model start behaving in a way that we don't want it to, because models drift. AI models have this thing called either data drift or concept drift, whereby over time they'll behave in ways that slightly move away from their behavior when they were launched.

So we have model monitoring, and we have technology and tooling to allow us to identify early on if the model is starting to do that. And if it is, we'll intervene. The development teams will intervene. They'll retrain the model and relaunch the model. And that has the same level of scrutiny as the development practices do as well.
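As a concrete illustration of the drift monitoring Paul describes, here is a minimal sketch that compares a model's live score distribution against its training-time baseline using the Population Stability Index, a metric commonly used in credit model monitoring. The data and alert thresholds are illustrative assumptions, not NatWest's actual process.

```python
# Minimal sketch of data-drift monitoring: compare a live score distribution
# against the training-time baseline using the Population Stability Index (PSI).
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """PSI between a training-time baseline sample and a live sample."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    live = np.clip(live, edges[0], edges[-1])      # keep live values inside the baseline bins
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    base_pct = np.clip(base_pct, 1e-6, None)       # avoid log(0) for empty bins
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.normal(600, 50, 10_000)   # e.g. model scores at launch (illustrative)
live_scores = rng.normal(585, 55, 2_000)        # scores observed this month (illustrative)

psi = population_stability_index(baseline_scores, live_scores)
if psi > 0.25:        # commonly cited "significant shift" threshold
    print(f"PSI={psi:.3f}: significant drift - escalate and consider retraining")
elif psi > 0.10:
    print(f"PSI={psi:.3f}: moderate drift - monitor closely")
else:
    print(f"PSI={psi:.3f}: stable")
```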

So that's quite a rich and large practice that's in banks. And what banks have done more recently, and I've done it at two banks now, is we have an ethics panel, or an ethics committee. And the role of that committee is to look, really early on, when we're deciding to build a model, before we build it: we'll look at the problem we're trying to solve.

We'll look at what AI solution we think we're gonna implement, and then we'll see if there are any unanticipated, bad consequences that could come from it. And there are techniques to try and unearth and filter these out. 'Cause no one builds AI with the intention of a bad consequence.

Bad consequences are unanticipated. So we have techniques, using people from diverse backgrounds, to look at a problem and say, "ah, actually, could something happen that is not really purposefully designed?" And that committee or panel will surface that, and we'll get the development teams to think about it, and then we'll advise risk management that this could happen and that maybe they should look at risk mitigation techniques for any unanticipated consequences.

And there are other things that we do in the model development lifecycle, but I think an ethics committee or an ethics board is really important.

Sanjay Podder: Thanks for a very elaborate response, that really is very helpful. You know, at the same time, I wonder, you know, with generative AI in particular, there are a couple of new risks coming up, right? Hallucination. While drift is understood, hallucination is a new kind of risk. Or harmful content. Or the main topic that we'd love to discuss today, the energy use, because traditionally we have always been thinking about the training of AI models that needs a lot of energy, but now we are looking at inferencing, right?

You rightly pointed out that AI is now getting democratized, right? Everybody is using prompting, inferencing, and so many studies show that the energy use, and therefore the emissions, are happening more while you prompt rather than when you train, right? So a lot of new things coming up with gen AI, and especially hallucination as well.

And while with traditional AI there has been a lot of rigor around how you put in, you know, the safeguards, the guardrails and everything, any thought on how that game changes when we talk about gen AI, right? When we shift from traditional AI to gen AI, you suddenly look at a larger, you know, landscape of risk.

And in your own personal experience, you know, how has that shaped up in financial services industry? How are people trying to manage that risk?

Paul Dongha: Yeah, again, really good question, Sanjay. So I think that, in my mind, there are two major things to look at, right? When it comes to the democratization of AI, let's put it this way, 

I think the use of copilots in organizations is gonna be big, right? There's Office 365 Copilot, Dynamics 365, GitHub and GitLab copilots. Copilots, I think, are gonna be everywhere, and most, if not all, knowledge workers will have at least a copilot of some sort.

There'll be copilots for teams. There'll be copilots for departments. There'll be, who knows, but the underlying message is that there's going to be a huge proliferation of copilots. I think of them as knowledge assistants, knowledge worker assistants. So they'll be like part of the team.

Everyone will have one. Now, when we look at each of those, every time a prompt goes in there, it's a prompt to a large language model, right? Like a GPT model. And there you have it. So if you imagine the proliferation of these and the continual prompting of these, it's really quite mind-blowing how much it's going to be used.

It's not even that every knowledge worker has one; we're looking at a proliferation of these. So I think once you get your head around the use of that, you think, "wow, that's huge." But I think the second thing I'd like to mention on top of copilots is actually when we look at agentic AI. So, I think that's going to be the word of 2025, it seems.

Agents.

In fact, my PhD was all about AI agents in the 90s. Believe it or not, we used to talk about AI agents back in the 90s. We couldn't build them. So, I think we will have agents and multi-agent systems, where each agent could be a copilot. It'll have the ability to solve problems in a very narrow domain, but it will have the ability to talk to other copilots, i.e. other agents. And that inter-agent communication, maybe delegating a task or delegating a query, the ability for agents to cooperate and communicate, that is going to open up a vast amount more computing happening through transformer models.
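To make the compute implication concrete, here is a toy sketch of the kind of inter-agent delegation being described. Every hand-off in a real system would correspond to at least one more large language model inference, which is why multi-agent systems multiply compute. The agent names and logic are hypothetical.

```python
# Toy sketch of inter-agent delegation. Each handle() call stands in for one
# round of large-language-model inference, so delegation multiplies compute.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    skills: set
    llm_calls: int = 0
    peers: list = field(default_factory=list)

    def handle(self, task: str) -> str:
        self.llm_calls += 1                      # each step = one model inference
        if task in self.skills:
            return f"{self.name} completed '{task}'"
        for peer in self.peers:                  # delegate to a peer that can help
            if task in peer.skills:
                return f"{self.name} delegated to " + peer.handle(task)
        return f"{self.name} could not complete '{task}'"

research = Agent("research-copilot", {"summarise filings"})
credit = Agent("credit-copilot", {"draft credit memo"})
research.peers.append(credit)

print(research.handle("draft credit memo"))
print("total model calls:", research.llm_calls + credit.llm_calls)
```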

And one can only guess how vast that's going to get. So, you know, when you look at organizations like NVIDIA and they look to the future and talk about millions of agents being co-workers, you know, when you take it to that extreme, it really stretches the mind to think, "wow, that's how much compute this is going to use."

So I think those two, call them inflection points, call them what you will, but in my mind, those are two massive things that just mean that so much more compute is going to happen. So if we look on the compute side now, we know the cost per token is going down. Fortunately, it's going down dramatically.

And we've seen DeepSeek and so on launching in late January. So I think, Sanjay, to your point, we shouldn't have to worry about the training costs anymore. I think the training costs are going to go down. The massive thing is going to be the inference-time compute. That business of using them indiscriminately, right, that's where all the cost is going to be.

That's where all the usage is going to be. And that's why all three big tech companies, Google, Amazon and Microsoft, are invested in nuclear power plants, and you've heard about Three Mile Island in Pennsylvania. I think it was Microsoft, I can't recall which of them, that bought that sort of disused, mid-sized nuclear power reactor.

So big tech companies have got it. They are betting that this is going to happen. There's going to be an unsustainable amount of compute. So from the grid, they can't satisfy the demand, so they're going to generate their own electricity. Fortunately, it's almost renewable, right? So it's good. So, however, that's just one part of how to deal with it.

The other part is going to be cooling. Cooling data centers predominantly uses water. I mean, there are technologies now that are trying to reduce that, but we really need to pay serious attention to it, because even in training GPT-4, there were water crisis issues there. So we have to really be careful about the inference-time cooling.

And I think the other one is carbon emissions. Look, we know big tech companies still have data centers that use carbon-fuelled electricity, let's put it that way. And big tech companies use renewable offsets so that they have a better story when it comes to reporting climate emissions.
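A simplified sketch of the reporting gap being described: market-based figures can net out purchased renewable certificates or offsets, while location-based figures use the carbon intensity of the local grid. All numbers below are illustrative assumptions, and real market-based accounting uses supplier-specific emission factors rather than a simple subtraction.

```python
# Simplified illustration of location-based vs. market-based emissions reporting.
# All figures are illustrative, not real data-centre numbers.
energy_kwh = 1_000_000                 # assumed annual data-centre energy use
grid_intensity_g_per_kwh = 450         # assumed local grid intensity (gCO2e/kWh)
renewable_certificates_kwh = 900_000   # assumed renewable purchases / offsets

location_based_t = energy_kwh * grid_intensity_g_per_kwh / 1e6
market_based_t = (max(energy_kwh - renewable_certificates_kwh, 0)
                  * grid_intensity_g_per_kwh / 1e6)

print(f"location-based: {location_based_t:.0f} tCO2e")   # what the local grid actually emitted
print(f"market-based:   {market_based_t:.0f} tCO2e")     # what the offset-adjusted report shows
```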

But actual location-based climate emissions are still growing. It's still a problem. So in different geographies around the world, Sanjay, as you well know, you'll find that there are heavy uses of carbon-intensive electricity sources. So I think we all have to look at that, and organizations like banks and large organizations that are going to be big users of these tools need to think, "how do we use copilots?

When do we use copilots? How can we be efficient in that use of copilots?" Because I don't think it's just a big tech problem to solve. I think, just like climate all over the world is everyone's responsibility. You know, we have electric cars now. We're careful about how we recycle.

We don't use, you know, we don't use plastic bags indiscriminately at supermarkets. You know, we're careful about how we go about our daily life. We use packaging that's eco-friendly. I think the same mindset shift needs to happen in the use of copilots in organizations. Everyone needs to think "what role do I play in ensuring that I just don't use them for absolutely everything just because I can?

But I use those technologies wisely for when I need to use them. And when I don't, I'll use something else." Standards bodies have a role to play. I think governments have a role to play. I think organizations do. I think individuals have a role to play. I think model developers have a role to play. And there are all sorts of things that are in flight on that: making models more efficient, distillation of models, quantization of models.

So there are many threads that we can pull on. Fortunately, there's GreenOps and FinOps and, as you know, Sanjay, the amazing work that the Green Software Foundation has done. And I'm a big promoter and big believer that we should continue down that track. I'll stop there because I've probably talked a lot about that.

Sanjay Podder: Great insights. Thanks for sharing, Paul. And Paul, my other question would be regarding regulations and, you know, the UK government's recent AI white paper advocating for a pro-innovation approach to AI regulation. Now, given your experience advising policymakers, what do you believe are the key policy shifts needed to ensure AI innovation while maintaining trust and ethical safeguards, as well as reducing the impact AI is having on the environment by decarbonizing the software industry?

Let me hear your perspective here. 

Paul Dongha: That's an interesting question. I think when it comes to policies, they're really important. So I certainly believe that organizations should increase and continue to have dialogue with regulators. So I think the need to continue talking to regulators is of paramount importance.

So when we look at financial services as an industry, financial services has the luxury of rewarding employees well, especially in technology. And we create a lot of models. We're close to our customers. We have a good sense of how to create models and use AI to benefit our customers. And I think in keeping the dialogue open with regulators to say, "look, this is how we're using AI.

These are the issues we see. This is how we're tackling issues. This is how we're doing risk management for AI, using generative AI" and so on and so forth. I think that dialogue has to continue so that we can help them, those regulators, understand our pain points much more. So this, the take off of AI, this proliferation of generative AI, I think demands that we continue to have that dialogue, and probably more frequently than we're currently having.

And I'll give you one, I guess, one example. The regulators are keen to say, "look, banks and organizations should treat their customers fairly." So fairness, absolutely 100%, that's at the forefront of our mind as a bank. And we want to promote fairness. But fairness is a contested concept. There's no single agreed upon definition of fairness.

And how do you define bias and how do you define fairness? And there's concepts like unfair bias. And there may be things like fair bias. So I think it's a really nuanced conversation how you go about approaching fairness and how you define fairness thresholds. And those kind of conversations need to happen because the regulatory guidelines do not prescribe how to be fair or what constitutes fairness.

And that's right, they shouldn't have to be prescriptive. But I think organisations should talk internally about how they are fair, and should talk to regulators about how they are being fair and their approaches to fairness. Because there is a classic problem that, between accuracy and fairness, there's a trade-off.

So how good a model is, how accurate it is, compared to notions of fairness, there's usually a trade-off. So you can try and be fairer, but you might not have such an accurate model. Or you can have a model that's very accurate, but actually might not be fair. And managing that trade-off is quite nuanced and difficult to do.
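As an illustration of how that trade-off might be quantified, here is a minimal sketch that scores a synthetic lending model on both accuracy and one contested definition of fairness, demographic parity, under two threshold choices. The data, thresholds and metric choice are illustrative assumptions, not a description of any bank's methodology.

```python
# Minimal sketch of measuring the accuracy/fairness trade-off on synthetic data,
# using demographic parity as one (contested) fairness definition.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                        # protected attribute: 0 or 1
score = rng.normal(0.55, 0.15, n) - 0.05 * group     # one group happens to score lower
repaid = rng.random(n) < score.clip(0, 1)            # synthetic "true" repayment outcome

def evaluate(threshold_a, threshold_b):
    """Approve group 0 above threshold_a and group 1 above threshold_b."""
    approve = np.where(group == 0, score > threshold_a, score > threshold_b)
    accuracy = np.mean(approve == repaid)
    parity_gap = abs(approve[group == 0].mean() - approve[group == 1].mean())
    return accuracy, parity_gap

# Same threshold for both groups vs. a threshold adjusted to narrow the approval gap
for name, (ta, tb) in {"single threshold": (0.5, 0.5),
                       "adjusted threshold": (0.5, 0.45)}.items():
    acc, gap = evaluate(ta, tb)
    print(f"{name}: accuracy={acc:.3f}, demographic-parity gap={gap:.3f}")
```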

So I think we need to elevate the conversation both in risk management and to our policy experts and governments that talk about how we manage that trade-off. And there are similar trade-offs that I think we'll see around sustainability, where you might want the most accurate model. And you might say, okay, in a high stakes use case, accuracy is super important, but sustainability is less important.

So I'll implement a model that might be very compute intensive because I'm very concerned about having a hallucination free, superbly well curated answer, that's compute intensive to generate. Because it's a high stakes scenario and I don't want to make an error. Or on the other hand, you might say, actually some uses of gen AI, maybe a kind of a back office, low risk process,

maybe I can sacrifice a little bit of accuracy and therefore have a solution that is not so compute intensive, because it's not a high stakes use case. And I think we're seeing trade-offs there that will probably have to be equally managed in the right way. So I think being open and talking to regulators about that is super important.
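One way to operationalize the accuracy-versus-compute trade-off Paul outlines is a simple routing rule that sends high-stakes requests to a larger, more carefully grounded (and more energy-hungry) model, and low-stakes back-office requests to a smaller one. This is purely a sketch; the model names, energy figures and risk tiers are assumptions, not anything described in the episode.

```python
# Sketch of risk-tiered model routing: high-stakes use cases get the large model,
# low-stakes ones get a smaller, less compute-intensive model.
from dataclasses import dataclass

@dataclass
class ModelOption:
    name: str
    params_billions: float
    est_wh_per_request: float   # assumed energy per request, for illustration only

LARGE = ModelOption("large-grounded-model", 175, 4.0)
SMALL = ModelOption("small-distilled-model", 7, 0.3)

def route(use_case_risk: str) -> ModelOption:
    """Pick a model based on the declared risk tier of the use case."""
    return LARGE if use_case_risk == "high" else SMALL

for use_case, risk in [("credit decision explanation", "high"),
                       ("internal meeting summary", "low")]:
    m = route(risk)
    print(f"{use_case}: {m.name} (~{m.est_wh_per_request} Wh/request)")
```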

And let's go back to agents, you know, the use of agents. So this is a really important point, because if we imbue agents with these capabilities, and we now talk about AI agents, how are they different from normal generative AI? Well, people often say two or three things. They say agents have reasoning.

So that means that an agent does far more on its own through creating a plan of things it does, and we've seen that in GPT-4o and in DeepSeek. And secondly, it has autonomy. So it has more autonomy than we would otherwise give AI systems. So when you combine autonomy and reasoning together, you have quite sophisticated AI systems.

How do you deal with that sophistication? What is the agent accountable for, and what is the human accountable for? Has that line shifted? And really it shouldn't shift, right? I think, and I have a belief, that AI should always be a tool that augments knowledge workers and the work they do. So accountability stays with people, but when you do give agents an increased role and more autonomy, where has that accountability gone?

Has it shifted? In a multi agent system, where is accountability? When you've got agents talking to each other, asking for things to be done and then there's agents doing it. Well, you know, I think we're just starting to scratch the surface on these kind of problems and I think we will need the help of organizational theorists, psychologists, sociologists to talk to us around how we think about organizations that are comprised of things like that.

I think we're just scratching the surface, Sanjay, on multi-agent systems, for example.

Sanjay Podder: Yeah, absolutely. And that's a wonderful note to, you know, conclude our podcast today. We're entering the brave new world of humans plus machines, right? And, you know, how are we going to navigate it? You know, that was very insightful, Paul, absolutely insightful. So, Paul, with all the insights that you have shared with us, especially around responsible AI in financial services, all of us will be thinking about the future a lot, I'm sure of it. Your work is shaping the future of AI ethics, sustainability and innovation in banking. So before we wrap up, what's your final message for technology leaders looking to integrate responsible AI into their organizations?

Paul Dongha: I think there are two messages here, Sanjay. I think some people I've worked with and alongside think of responsible AI or AI ethics as something that slows down innovation, right? Like, we're the guys that say, no, you can't do something. I think that mindset needs to change. I really believe it has to.

And I'll tell you why. To get generative AI models to do the right thing, right? To be accurate, you kind of need people like AI ethics professionals in the room with the development teams. We together need to make AI systems accurate and behave the right way. So you can't just leave it as a technology exercise in and of itself.

There have to be other people involved. And AI ethics people are technical as well, right? I'm very technical as well. So, we need to be part of making systems more competent. This thing about competency is so pivotal. Generative AI is probabilistic; it's in some ways non-deterministic. So actually, responsible AI techniques are there to make the models better and competent.

We're not there to stop things from happening. It's a real mind shift that needs to happen in organisations, to get boards to understand that actually we're part of developing models, not part of slowing down innovation. And I think the second takeaway, if I was talking to senior folk in an organisation, is have AI in the boardroom, right.

Have a knowledgeable AI strategy, responsible AI person in the boardroom and equip your boards with the level of knowledge that they need to deal with this. I do believe it is, people talk about it as a transformational technology, some people talk about it as the new electricity or something, I don't know if it's that, but what I do know is I really do believe it's going to be literally everywhere. So organizations need to have this as a priority on their board to understand how they can adopt it, how they can use it, and how they can keep up with their peer group of organizations who will undoubtedly be trying to adopt it. So I think those are my two takeaways. 

Sanjay Podder: Great, Paul. And where can the audience find more about your work and what NatWest is doing as well? 

Paul Dongha: Oh, well, you can follow me on LinkedIn. I do post from time to time. I have particular views on technology. I want to point out that very recently, I think it was literally a week ago, we published our AI ethics code of conduct on the NatWest website. So if people go to www.natwest.com, you will find on our sustainability pages, our AI ethics code of conduct that talks about AI ethics and data ethics and what we as an organization are doing internally to promote that and to make sure our risk management processes are there.

So I really encourage people to do that. And there are many online resources to learn more about responsible AI that you can find online. There are loads of them. 

Sanjay Podder: Thank you so much, Paul. This was really great. Thanks for your contribution. And we really appreciate you coming on to CXO Bytes.

Paul Dongha: Great. Thank you, Sanjay, it's been a pleasure and thank you very much. Bye bye for now.

Sanjay Podder: Awesome. That's all for this episode of CXO Bytes. All the resources for this episode are in the show description below, and you can visit podcast.greensoftware.foundation to listen to more episodes of CXO Bytes. See you all in the next episode. Bye for now. 

Hey, everyone. Thanks for listening. Just a reminder to follow CXO Bytes on Spotify, Apple, YouTube, or wherever you get your podcasts. And please do leave a rating and review if you like what we're doing. It helps other people discover the show. And of course, we want more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. Thanks again, and see you in the next episode.