Responsible AI Podcast Ep.1 - “AI Ethics is a Team Sport”

In this episode of the Responsible AI Podcast, we speak with Maria Axente, the Responsible AI Lead at PwC UK. She works with AI practitioners, conducts research in areas like AI audits and gender in AI, collaborates with organizations like the World Economic Forum, and consults for many of the largest businesses in the UK to help them implement responsible AI. Her work puts her at the intersection of all things AI, able to see not just the technology but the context around it. We spoke with Maria about what responsible AI means, why its importance is often overlooked, and creative ways to incentivize teams to implement AI ethically.

Definition of Responsible AI

The term “Responsible AI” has been gaining a lot of traction over the past few years. “It’s a positive surprise,” said Maria, whose team put together the first framework for Responsible AI in 2017. 

However, looking at the definitions of Responsible AI that are circulating, Maria noticed that most don’t go far enough. “Most of the definitions of Responsible AI are focused on the outcome that AI is going to deliver,” she said; they want the outcome to be fair, beneficial, safe, secure, and accurate. The problem is that this kind of definition doesn’t explain how to get there. Instead, Maria said, “Let’s focus on defining Responsible AI through the processes we need to set up to achieve that outcome.”

In creating a framework for these processes, Maria’s team identified three main layers: 

  1. Ethics and regulation: How do you identify the right ethical principles and ensure that your use cases are compliant with the laws and regulations from different jurisdictions?
  2. Governance and risk mitigation: How do you govern AI systems end to end? Because AI has a degree of agency, we can’t treat it like a traditional technology. This layer includes being able to identify and proactively mitigate risks across the lifecycle of the system. “Risk management is a massively overlooked discipline outside financial services,” Maria said. 
  3. Performance: How are you able to test and monitor the performance of your application in a continuous manner? This lays the groundwork for AI audits and quality assurance: the organization needs to be able to demonstrate how well its systems perform against regulatory requirements. (A minimal sketch of what such continuous monitoring could look like follows this list.)
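
To make that third layer concrete, here is a minimal sketch, in Python, of what a continuous performance check might look like. The `PerformanceMonitor` class, its window size, and its accuracy floor are illustrative assumptions, not part of PwC’s framework; a production system would feed alerts into dedicated monitoring and audit tooling rather than printing them.

```python
from collections import deque


class PerformanceMonitor:
    """Track rolling accuracy over a sliding window of labeled predictions."""

    def __init__(self, window_size: int = 1000, accuracy_floor: float = 0.90):
        # Hypothetical defaults; real thresholds would come from your
        # organization's risk and governance framework.
        self.window = deque(maxlen=window_size)
        self.accuracy_floor = accuracy_floor

    def record(self, prediction, actual) -> None:
        """Add one outcome as ground-truth labels arrive."""
        self.window.append(prediction == actual)

    def check(self) -> bool:
        """Return False and raise an alert if accuracy drops below the floor."""
        if not self.window:
            return True  # nothing observed yet
        accuracy = sum(self.window) / len(self.window)
        if accuracy < self.accuracy_floor:
            # In production this would notify an alerting service and leave
            # an audit trail, not just print to the console.
            print(f"ALERT: rolling accuracy {accuracy:.3f} "
                  f"is below floor {self.accuracy_floor}")
            return False
        return True
```

Run a check like this on a schedule (say, after each batch of labeled outcomes) and you have the beginnings of the audit trail Maria describes: a documented, repeatable record of how the system performs over time.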

Embedding ethics, governance, and risk management into AI systems is a holistic effort—not a one-off. As Maria said, “If it’s not responsible, it shouldn’t be AI at all.”  

The Challenges of Implementing Responsible AI

Maria believes that most businesses are not ready for AI. “The biggest challenge is complexity,” she said. This applies in several ways. First, there’s the complexity of the way organizations operate, including the sometimes opaque internal systems that can be difficult to change. Then there’s the complexity of AI, which can’t be confined to the IT department like normal software. Understanding this fragmentation is key, but as Maria explained, we’re trained to work in niches rather than to connect the dots.

“AI will not only bring benefits, it will disrupt what we do and who we are,” said Maria. This disruptive nature is the second biggest challenge. “AI has agency, is autonomous, adapts to the external environment, and interacts with the environment,” Maria explained. AI pushes us to think about real-time, cyclical processes. But most business processes (unlike nature and life) require a “linear mindset.” Moving from linear thinking to cyclical, connective thinking will be one of the biggest changes that AI requires of us. 

Maria identified a third major challenge as enterprise readiness. While more and more businesses want to implement AI, most of them are still at the proof-of-concept stage, with only a handful of applications. Until AI approaches “critical mass” in enterprise strategy, it will be hard to create incentives for implementing AI responsibly. This is true at both ends of the reporting chain: C-suite executives want to know why they should take on the extra overhead, and data scientists need a good reason to think beyond their core objective of optimizing model accuracy.

How to Create Positive Incentives

Change won’t happen without negative incentives (in the form of regulation) or compelling positive incentives that align an organization. Maria discussed some of the ways this can happen.

Ethical businesses have a competitive advantage   

Responsible AI that has risk, governance, and ethics embedded has the potential to create a distinct competitive advantage, but it’s too early to see hard data on this. “It’s a bit of a leap of faith,” Maria explained.

Ethics can be intangible, but that doesn’t mean it’s not important. Maria noted that we used to debate whether business ethics was “worth it,” but now it’s widely agreed that ethical businesses are also good businesses. She sees the same thing happening with Responsible AI. Businesses “will be able to retain loyal customers by providing transparency, equity, fairness, and safety when using AI.” Maria is optimistic about this pressure coming from Gen Z consumers. They have seen where AI can go wrong, and in some cases been personally affected. Safe AI applications will be fundamental to their existence.

Sometimes simply being aware is an incentive

“There are so many similarities between a doctor’s work and a data scientist’s work,” said Maria, in terms of the way their work has a direct impact on people’s lives and needs to uphold a high level of ethics and responsibility. “The numbers on the screens are not just numbers, they’re people’s lives,” said Maria. But data scientists haven’t always been taught to think this way. It’s a matter of increasing awareness. 

When Maria’s team has explained this to the data scientists they consult with, the responses have been eye-opening. Data scientists welcomed the knowledge of how their work affects people. They were happy to consider other perspectives and to take on the responsibility of proactively fighting bias, unfairness, and harm. In other words, they just needed to be more aware.

Keep people excited, engaged, and rewarded

Implementing Responsible AI is a big change, and for any change, keeping morale high is important. Financial incentives, like a bonus for implementing Responsible AI, can certainly help. But there are other things teams can do too, like offering time off or giving employees a chance to work with charities. Maria thinks that pro bono work in particular can get the team thinking about the positive impact of technology, not just the negatives. For example, they could help the community use machine learning or teach underprivileged students to code. And sometimes, the team just wants to go to Disneyland as a reward for becoming internal experts in Responsible AI. Why not?

What teams should think about when building AI solutions

“We need to go from ‘can I build it,’ which is the mantra of Silicon Valley, to ‘should I build it,’” said Maria. This means having a foundation in ethics, if possible (“Read Plato’s Republic,” Maria recommends), and understanding the consequences by educating yourself on examples of the negative impacts of ML.

To have a robust approach, you need a framework that gives you guidance and stability over time. And you need the energy and passion to make it happen. While there will always be constraints and frustrations to deal with at work, it’s still possible to find an inner motivation to take the framework and make it your own.

“AI ethics is a team sport,” said Maria. Change has to come both from the top down and the bottom up. Each team will have its own culture, so rather than changing that significantly, focus on where the gaps are. How can you add a few extra actions to your process so you can reflect, discuss, and debate? You don’t have to overcomplicate things with giant impact assessments and questionnaires. Focus on things that are common sense and are simple and elegant to implement.

The three things that can make the biggest difference for Responsible AI

Putting it all together, Maria explained that she relies on three main things to keep her optimistic.

  1. Visionary leaders who want to differentiate their business.
  2. Regulation, i.e. changing the rules of the game. It might be a negative incentive, but it will move the needle and get big companies to react.
  3. Society. Maria hopes there will be a “profound change” in the years to come that starts from the grassroots. “It’s about the application of AI in the government, and the impact it will have on us as citizens.” She encourages everyone “to play an active role: push back, question, hold accountable, participate.” 

“If we have these, in the next 5 years hopefully responsible AI will be the only way of doing AI,” Maria said. We’re hopeful, too. 

For previous episodes, please visit our Resource Hub.

If you have any questions or would like to nominate a guest, please contact us.